Grafana Interview Questions

Server-Side Expressions and Alerting


Your alert rules combine metrics from multiple datasources: error_rate from Prometheus + customer_count from Datadog + billing_active from a database. Currently, you combine metrics client-side (via dashboard transforms), but alerting requires server-side evaluation. Design a server-side expression system for complex alert logic.

Implement server-side expressions: (1) Multi-datasource queries—allow alert rules to query multiple datasources in a single expression: prometheus_query + datadog_query + database_query. (2) Expression language—provide a simple expression language: if(error_rate > 10% AND customer_count > 1000, alert_severity_high). (3) Cross-datasource math—combine metrics: (error_rate / customer_count) > threshold. (4) Sub-query caching—expensive sub-queries are cached and reused across expressions. (5) Expression optimization—rewrite expressions for efficiency: push filters down to the datasource level. (6) Error handling—if one datasource is slow or unavailable, expression evaluation uses a fallback (previous value or a default of 0). (7) Conditional evaluation—if condition_A is false, skip expensive condition_B (short-circuit evaluation).

Implement an expression builder UI: a visual builder for complex expressions where users drag and drop query blocks and define relationships; for experts, allow raw expression syntax. Build an expression testing sandbox so users can test expressions against sample data before deployment. Create an expression library of pre-built expressions for common patterns (error spike, latency regression, capacity threshold) that users clone and customize. Implement expression versioning: track changes and enable rollback if an expression causes false positives. Create monitoring of per-expression evaluation latency and alert if it exceeds 10s (too slow for alerting). Test expressions: verify correctness with known data, test error handling, and test performance under load.
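To make the fallback and short-circuit points concrete, here is a minimal Go sketch of evaluating a cross-datasource condition. The queryPrometheus, queryDatadog, and queryBilling closures are hypothetical stand-ins for real datasource clients, and the fallback values are illustrative assumptions, not built-in Grafana behavior.

```go
// Sketch: cross-datasource alert condition with per-source fallbacks.
package main

import (
	"errors"
	"fmt"
)

type queryFn func() (float64, error)

// withFallback runs a sub-query and substitutes a fallback value on error,
// so one unavailable datasource does not break the whole expression.
func withFallback(q queryFn, fallback float64) float64 {
	v, err := q()
	if err != nil {
		return fallback
	}
	return v
}

func main() {
	// Hypothetical datasource clients; real ones would call Prometheus,
	// Datadog, and a SQL datasource plugin.
	queryPrometheus := queryFn(func() (float64, error) { return 12.5, nil })             // error_rate (%)
	queryDatadog := queryFn(func() (float64, error) { return 0, errors.New("timeout") }) // customer_count
	queryBilling := queryFn(func() (float64, error) { return 1, nil })                   // billing_active

	errorRate := withFallback(queryPrometheus, 0)
	// Datadog is down in this example, so the previous known value is used.
	customerCount := withFallback(queryDatadog, 1500)

	// Short-circuit evaluation: the billing sub-query only runs if the
	// cheaper conditions already hold.
	fireHighSeverity := errorRate > 10 && customerCount > 1000 && withFallback(queryBilling, 0) == 1
	fmt.Println("fire high-severity alert:", fireHighSeverity)
}
```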

Follow-up: Your expression combines Prometheus and Datadog queries. During a Datadog outage, Prometheus is fine, but the alert expression fails entirely (no alert fires). How would you handle partial datasource failures gracefully?

Your alert rule uses a complex server-side expression: 10 sub-queries, conditional branches, math operations. Expression evaluation is slow (30 seconds), so alerts fire 30 seconds after an issue occurs. The SLA requires alerting within 5 seconds. Optimize the expression evaluation system.

Implement expression optimization: (1) Query parallelization—sub-queries that don't depend on each other run in parallel (the Prometheus and Datadog queries run concurrently, not sequentially). (2) Query result caching—cache sub-query results; if the same query is evaluated twice in quick succession, reuse the cached result. (3) Lazy evaluation—only evaluate branches that matter: if the first condition is false, skip the second branch (short-circuit). (4) Precomputation—identify expensive sub-queries, pre-compute and store their results, and have the expression read the pre-computed values. (5) Indexing—for database queries in expressions, ensure proper indexes; slow database queries are often the bottleneck. (6) Compilation—compile the expression to bytecode once and execute the bytecode repeatedly, instead of re-parsing the expression on every evaluation. (7) Resource limits—expression evaluation has limits: max 1000 datapoints per query, 10s timeout. Excess queries are capped.

Implement an expression profiler that shows execution time for each sub-query and identifies slow components. Build optimization suggestions: "sub-query A takes 20s; consider caching or simplification." For critical alerts, implement fast-track evaluation: a simplified expression that evaluates in under 2s and triggers immediately, while the full expression runs in parallel and provides richer context. Create an alerting SLA dashboard showing alert latency by rule and SLA compliance, and alert on violations.
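A minimal sketch of the first two points, assuming hypothetical fetchPrometheus/fetchDatadog sub-queries: independent sub-queries run concurrently, and results are reused from a short-TTL cache. The cache key scheme and 30s TTL are illustrative assumptions.

```go
// Sketch: parallel sub-query evaluation with a short-TTL result cache.
package main

import (
	"fmt"
	"sync"
	"time"
)

type cacheEntry struct {
	value   float64
	expires time.Time
}

type queryCache struct {
	mu      sync.Mutex
	entries map[string]cacheEntry
	ttl     time.Duration
}

// getOrRun returns a cached result if it is still fresh, otherwise runs the
// sub-query and caches the result for ttl.
func (c *queryCache) getOrRun(key string, run func() float64) float64 {
	c.mu.Lock()
	e, ok := c.entries[key]
	c.mu.Unlock()
	if ok && time.Now().Before(e.expires) {
		return e.value
	}
	v := run()
	c.mu.Lock()
	c.entries[key] = cacheEntry{value: v, expires: time.Now().Add(c.ttl)}
	c.mu.Unlock()
	return v
}

func main() {
	cache := &queryCache{entries: map[string]cacheEntry{}, ttl: 30 * time.Second}

	// Hypothetical sub-queries; sleeps stand in for datasource latency.
	fetchPrometheus := func() float64 { time.Sleep(50 * time.Millisecond); return 12.5 }
	fetchDatadog := func() float64 { time.Sleep(80 * time.Millisecond); return 1500 }

	var errorRate, customerCount float64
	var wg sync.WaitGroup
	wg.Add(2)
	// Independent sub-queries run concurrently instead of sequentially.
	go func() { defer wg.Done(); errorRate = cache.getOrRun("prom:error_rate", fetchPrometheus) }()
	go func() { defer wg.Done(); customerCount = cache.getOrRun("dd:customer_count", fetchDatadog) }()
	wg.Wait()

	fmt.Println("alert:", errorRate > 10 && customerCount > 1000)
}
```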

Follow-up: Your parallel evaluation and caching reduced latency to 5s. But the cache TTL is 1 minute, so expressions can see data that is up to 55s stale. How do you balance caching and freshness for alerting?

A team's expression syntax is incorrect (a typo in a variable name). Expression evaluation errors out and the alert doesn't fire; when an issue occurs, the on-call alert stays silent. Design error handling and debugging for server-side expressions.

Implement expression debugging: (1) Syntax validation—validate expression syntax before deployment; fail the deployment if it is invalid. (2) Error messages—clear error messages: "variable 'error_count' is undefined. Available: [error_rate, error_total]." (3) Type checking—ensure operations are type-safe (you can't divide a string by a number); flag type mismatches. (4) Dry-run—before deploying an expression, run it against sample data, show the results, and catch runtime errors. (5) Step-through debugging—for complex expressions, show intermediate results after each step: "sub_query_1 = 100, sub_query_2 = 200, result = 300." (6) Error alerting—if expression evaluation fails repeatedly, alert admins: "Expression alert_rule_X failed 10 times today." (7) Fallback behavior—if an expression fails at alert time, use a fallback configured per rule: fire the alert anyway (conservative) or suppress it (liberal).

Implement an expression linter that scans expressions for potential issues (unused variables, unreachable branches, potential division by zero). Build a debugging dashboard showing expression evaluation history, including failures, with drill-down into failed evaluations to see the error. Create an expression testing library: unit tests for expressions, written by the expression owners. Implement a validation checklist before deployment: syntax valid? Dry-run successful? Variables defined? Type-safe? All checks pass → deploy. Build alerting on expression errors: if an expression fails during alert evaluation, trigger a separate "expression error" alert.
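As a sketch of the syntax-validation and error-message points, the following checks that every variable referenced in an expression is defined and, if not, rejects deployment with a message listing the available names. The bare-identifier variable syntax and the helper names are illustrative assumptions.

```go
// Sketch: pre-deployment validation of variable references in an expression.
package main

import (
	"fmt"
	"regexp"
	"sort"
)

var identRe = regexp.MustCompile(`[a-zA-Z_][a-zA-Z0-9_]*`)

// validateVariables returns an error for the first undefined variable found,
// including the list of defined variables to make the fix obvious.
func validateVariables(expr string, defined map[string]bool) error {
	for _, name := range identRe.FindAllString(expr, -1) {
		if defined[name] {
			continue
		}
		available := make([]string, 0, len(defined))
		for k := range defined {
			available = append(available, k)
		}
		sort.Strings(available)
		return fmt.Errorf("variable %q is undefined. Available: %v", name, available)
	}
	return nil
}

func main() {
	defined := map[string]bool{"error_rate": true, "error_total": true}
	// Typo: error_count instead of error_rate. Deployment fails loudly here
	// instead of the alert silently breaking later.
	if err := validateVariables("error_count > 10", defined); err != nil {
		fmt.Println("deployment rejected:", err)
	}
}
```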

Follow-up: Your expression falls back to "fire alert anyway" on error. But the error is a genuine bug in the expression (a typo). The alert fires incorrectly, producing a flood of false positives. How do you distinguish legitimate transient failures from bugs?

Your expression-based alerting enables powerful logic: combined metrics, conditional evaluation, complex math. But teams are creating overly complex expressions (100+ lines, multiple nested conditions). Expressions become unmaintainable, and bugs creep in. How do you encourage simplicity and maintainability?

Enforce expression complexity limits: (1) Expression size limit—max 50 lines per expression; longer expressions are flagged for simplification. (2) Nesting limit—max 5 levels of nesting; deep nesting is hard to understand. (3) Sub-expression extraction—break a complex expression into sub-expressions (stored separately) that the main expression calls. (4) Documentation requirement—expressions over 20 lines require inline documentation: comments explaining the logic. (5) Complexity score—compute a complexity metric (branching factor, operation count) and flag expressions whose score is high. (6) Code review—expressions are reviewed before deployment, like code PRs; peers check for clarity and correctness. (7) Expression library—pre-built, well-tested expressions covering simplified, common patterns; reuse instead of reimplementing complex logic.

Implement expression templates: for common alert patterns (error spike, latency regression), provide templates where teams fill in thresholds rather than writing complex logic. Build refactoring suggestions: "this expression has 5 nested conditions; consider splitting it into 2 expressions." Create an expression best-practices guide: keep expressions simple and readable, use meaningful variable names, document complex logic. Implement expression scoring in a dashboard: per team, show average expression complexity; teams with high complexity are encouraged to simplify. Test complexity: ensure maximum-complexity expressions still evaluate reasonably fast (no performance penalty).
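One way to make the complexity score tangible is shown below: count lines, operators, and maximum nesting depth, then combine them into a single number to compare against a budget. The scoring weights and operator list are illustrative assumptions, not a standard metric.

```go
// Sketch: a simple expression complexity score (lines, operations, nesting).
package main

import (
	"fmt"
	"strings"
)

type complexity struct {
	Lines      int
	Operations int
	MaxNesting int
}

// analyze measures rough structural complexity of an expression string.
func analyze(expr string) complexity {
	c := complexity{Lines: strings.Count(expr, "\n") + 1}
	depth := 0
	for _, r := range expr {
		switch r {
		case '(', '{':
			depth++
			if depth > c.MaxNesting {
				c.MaxNesting = depth
			}
		case ')', '}':
			depth--
		}
	}
	for _, op := range []string{"&&", "||", ">", "<", "==", "+", "-", "*", "/"} {
		c.Operations += strings.Count(expr, op)
	}
	return c
}

// score folds the measurements into one number to compare against a budget;
// the weights here are arbitrary and would be tuned per organization.
func score(c complexity) int {
	return c.Operations + 5*c.MaxNesting + c.Lines/10
}

func main() {
	expr := "if((error_rate / customer_count) > 0.01 && (latency_p99 > 500 || saturation > 0.8), high, low)"
	c := analyze(expr)
	fmt.Printf("lines=%d ops=%d nesting=%d score=%d\n", c.Lines, c.Operations, c.MaxNesting, score(c))
}
```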

Follow-up: Your complexity limits block expressions over 50 lines. But a legitimately complex alert (real business logic) needs 100 lines and now cannot be expressed; the limits are too strict. How do you balance rules with flexibility?

Your expression-based alerting system is powerful, enabling teams to create custom alert logic. But without governance, expression logic diverges: identical business logic (an error-rate spike alert) is implemented 5 different ways across 5 teams, which is a maintenance nightmare. Design a governance system for expressions.

Implement expression governance: (1) Expression registry—a centralized, Git-tracked catalog of all expressions; teams can browse it, discover existing expressions, and reuse them. (2) Versioning—expressions are versioned, teams specify which version they use, and updates don't break old expressions. (3) Approval process—new expressions require approval (by a data governance team or the alert owner) to ensure quality and prevent duplication. (4) Expression owners—each expression has an owner (the team responsible for maintaining it), who is notified if issues arise. (5) Deprecation—old or incorrect expressions are marked deprecated; users are notified and encouraged to migrate to newer versions. (6) Standards—define expression coding standards: naming conventions, structure, documentation. (7) Impact analysis—when updating an expression, track which alert rules use it and notify the affected teams.

Implement expression search: find all expressions using a specific metric or matching a pattern. Build an expression dependency graph showing which expressions call which sub-expressions, and visualize the relationships. Create an expression review checklist: quality, correctness, performance, documentation. Implement a CODEOWNERS file for expressions, as with code: changes to critical expressions require approval from the listed owners. Test governance: simulate multiple teams creating similar expressions and verify the system detects duplicates and suggests reuse. Build metrics: expression reuse rate (% of expressions reused from the registry) and deprecation rate (% of deprecated expressions still in use). Alert if reuse is low (it suggests poor governance).
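A minimal sketch of the registry and duplicate-detection ideas: versioned entries with owners, and a check on normalized expression text so a new submission can be redirected to an existing expression. The Entry fields and the whitespace-only normalization rule are illustrative assumptions.

```go
// Sketch: in-memory view of a versioned expression registry with a
// duplicate check that encourages reuse over reimplementation.
package main

import (
	"fmt"
	"strings"
)

type Entry struct {
	Name       string
	Version    int
	Owner      string
	Expr       string
	Deprecated bool
}

type Registry struct {
	entries []Entry
}

// normalize strips whitespace so trivially different copies compare equal.
func normalize(expr string) string {
	return strings.Join(strings.Fields(expr), "")
}

// FindDuplicate returns an existing, non-deprecated entry whose normalized
// expression matches the submission, if any.
func (r *Registry) FindDuplicate(expr string) (Entry, bool) {
	n := normalize(expr)
	for _, e := range r.entries {
		if !e.Deprecated && normalize(e.Expr) == n {
			return e, true
		}
	}
	return Entry{}, false
}

// Register rejects submissions that duplicate an existing entry.
func (r *Registry) Register(e Entry) error {
	if dup, ok := r.FindDuplicate(e.Expr); ok {
		return fmt.Errorf("duplicate of %s v%d (owner %s); reuse it instead", dup.Name, dup.Version, dup.Owner)
	}
	r.entries = append(r.entries, e)
	return nil
}

func main() {
	r := &Registry{}
	_ = r.Register(Entry{Name: "error_spike", Version: 1, Owner: "team-platform", Expr: "rate(errors[5m]) / rate(requests[5m]) > 0.05"})
	err := r.Register(Entry{Name: "errors_high", Version: 1, Owner: "team-billing", Expr: "rate(errors[5m])/rate(requests[5m])>0.05"})
	fmt.Println(err)
}
```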

Follow-up: Your expression registry requires approval for new expressions, and the approval process takes 2 weeks. A team has an urgent alert (customer data loss), so they bypass the registry and create an unvetted expression. A bug causes false positives. How do you balance urgency against governance?

Your expression language is powerful (JavaScript-like: if/else, functions, loops). An engineer writes a computationally expensive expression: nested loops summing 100M datapoints. Expression evaluation takes 30 minutes and blocks alert evaluation for all other rules; the system becomes unresponsive. How do you prevent resource exhaustion from expressions?

Implement expression resource limits: (1) CPU timeout—max 10 seconds per expression; longer executions are terminated. (2) Memory limit—max 500MB per expression; exceeding it kills the evaluation. (3) Datapoint limit—max 1M datapoints per query; larger result sets are rejected. (4) Operation count limit—max 10M operations per expression; excessive computation is rejected. (5) I/O limits—max 100 file I/O operations and 100 network calls per expression. (6) Queue limits—max 100 pending expressions; beyond that, new evaluations are rejected or delayed. (7) Rate limiting—max 1000 expression evaluations per minute; excess evaluations are throttled.

Implement resource accounting: track CPU, memory, and datapoints for each expression, build a dashboard showing resource consumption, and alert when approaching limits. Create expression complexity analysis: before deploying, estimate resource requirements (predicted CPU time and memory) and flag expressions predicted to exceed the limits. Implement resource reservation: critical expressions get guaranteed resources; non-critical expressions get best-effort. For testing, provide a sandbox environment with strict limits where teams test expressions before production deployment. Build a runbook: "Expression is timing out: how to debug and optimize."
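A sketch of the timeout and datapoint limits, assuming a stand-in evaluation loop: the datapoint cap is checked up front, and the evaluation runs under a context deadline with periodic cancellation checks so a runaway loop cannot block other rules. The specific limit values mirror the numbers above but are illustrative.

```go
// Sketch: per-expression resource limits (wall-clock timeout, datapoint cap).
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

const (
	maxEvalTime   = 10 * time.Second
	maxDatapoints = 1_000_000
)

func evaluateWithLimits(ctx context.Context, datapoints []float64) (float64, error) {
	// Reject oversized queries before doing any work.
	if len(datapoints) > maxDatapoints {
		return 0, fmt.Errorf("query returned %d datapoints, limit is %d", len(datapoints), maxDatapoints)
	}
	ctx, cancel := context.WithTimeout(ctx, maxEvalTime)
	defer cancel()

	result := make(chan float64, 1)
	go func() {
		var sum float64
		for i, v := range datapoints {
			// Check for cancellation periodically so a runaway loop cannot
			// hold the evaluator past the timeout.
			if i%10_000 == 0 && ctx.Err() != nil {
				return
			}
			sum += v
		}
		result <- sum
	}()

	select {
	case v := <-result:
		return v, nil
	case <-ctx.Done():
		return 0, errors.New("expression evaluation timed out")
	}
}

func main() {
	data := make([]float64, 500_000) // within the datapoint limit
	v, err := evaluateWithLimits(context.Background(), data)
	fmt.Println(v, err)
}
```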

Follow-up: Your resource limits prevent slow expressions, but a legitimate expression (summing all errors across 100K services) hits the 1M-datapoint limit and cannot run at all. How would you handle high-volume legitimate queries?
