
Commit 5612aa0

Merge branch 'main' into ruler-alerts

2 parents: ba0fc52 + b266d7b

File tree: 3 files changed (+26, −21 lines)

CHANGELOG.md

Lines changed: 1 addition & 0 deletions
@@ -18,6 +18,7 @@
 * [CHANGE] Increased `CortexBadRuntimeConfig` alert severity to `critical` and removed support for `cortex_overrides_last_reload_successful` metric (was removed in Cortex 1.3.0). #335
 * [CHANGE] Grafana 'min step' changed to 15s so dashboard show better detail. #340
 * [CHANGE] Replace `CortexRulerFailedEvaluations` with two new alerts: `CortexRulerTooManyFailedPushes` and `CortexRulerTooManyFailedQueries`. #347
+* [CHANGE] Removed `CortexQuerierCapacityFull` alert. #342
 * [ENHANCEMENT] cortex-mixin: Make `cluster_namespace_deployment:kube_pod_container_resource_requests_{cpu_cores,memory_bytes}:sum` backwards compatible with `kube-state-metrics` v2.0.0. #317
 * [ENHANCEMENT] Added documentation text panels and descriptions to reads and writes dashboards. #324
 * [ENHANCEMENT] Dashboards: defined container functions for common resources panels: containerDiskWritesPanel, containerDiskReadsPanel, containerDiskSpaceUtilization. #331

cortex-mixin/alerts/alerts.libsonnet

Lines changed: 0 additions & 15 deletions
@@ -134,21 +134,6 @@
         |||,
       },
     },
-    {
-      alert: 'CortexQuerierCapacityFull',
-      expr: |||
-        prometheus_engine_queries_concurrent_max{job=~".+/(cortex|ruler|querier)"} - prometheus_engine_queries{job=~".+/(cortex|ruler|querier)"} == 0
-      |||,
-      'for': '5m', // We don't want to block for longer.
-      labels: {
-        severity: 'critical',
-      },
-      annotations: {
-        message: |||
-          {{ $labels.job }} is at capacity processing queries.
-        |||,
-      },
-    },
     {
       alert: 'CortexFrontendQueriesStuck',
       expr: |||
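If you still need the removed alert in your own environment, you can carry it in a local overlay on top of the mixin. The sketch below is illustrative only and not part of this commit: it assumes the mixin is vendored and importable as `cortex-mixin/mixin.libsonnet` and exposes the standard monitoring-mixin `prometheusAlerts` field; the group name is made up.

```jsonnet
// Hypothetical overlay (not part of this commit): keep the alert removed above.
// Assumption: the mixin is importable as 'cortex-mixin/mixin.libsonnet' and
// exposes the standard `prometheusAlerts` field.
local mixin = import 'cortex-mixin/mixin.libsonnet';

mixin {
  prometheusAlerts+: {
    groups+: [{
      name: 'local_querier_capacity',  // made-up group name
      rules: [{
        alert: 'CortexQuerierCapacityFull',
        expr: |||
          prometheus_engine_queries_concurrent_max{job=~".+/(cortex|ruler|querier)"} - prometheus_engine_queries{job=~".+/(cortex|ruler|querier)"} == 0
        |||,
        'for': '5m',
        labels: { severity: 'critical' },
        annotations: {
          message: '{{ $labels.job }} is at capacity processing queries.',
        },
      }],
    }],
  },
}
```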

cortex-mixin/docs/playbooks.md

Lines changed: 25 additions & 6 deletions
@@ -419,17 +419,36 @@ How to **investigate**:
 - Check the latest runtime config update (it's likely to be broken)
 - Check Cortex logs to get more details about what's wrong with the config
 
-### CortexQuerierCapacityFull
-
-_TODO: this playbook has not been written yet._
-
 ### CortexFrontendQueriesStuck
 
-_TODO: this playbook has not been written yet._
+This alert fires if Cortex is running without query-scheduler and queries are piling up in the query-frontend queue.
+
+The procedure to investigate it is the same as the one for [`CortexSchedulerQueriesStuck`](#CortexSchedulerQueriesStuck): please see the other playbook for more details.
 
 ### CortexSchedulerQueriesStuck
 
-_TODO: this playbook has not been written yet._
+This alert fires if queries are piling up in the query-scheduler.
+
+How it **works**:
+- A query-frontend API endpoint is called to execute a query
+- The query-frontend enqueues the request to the query-scheduler
+- The query-scheduler is responsible for dispatching enqueued queries to idle querier workers
+- The querier runs the query, sends the response back directly to the query-frontend and notifies the query-scheduler that it can process another query
+
+How to **investigate**:
+- Are queriers in a crash loop (e.g. OOMKilled)?
+  - `OOMKilled`: temporarily increase queriers memory request/limit
+  - `panic`: look for the stack trace in the logs and investigate from there
+- Is QPS increased?
+  - Scale up queriers to satisfy the increased workload
+- Is query latency increased?
+  - An increased latency reduces the number of queries we can run / sec: once all workers are busy, new queries will pile up in the queue
+  - Temporarily scale up queriers to try to stop the bleed
+  - Check if a specific tenant is running heavy queries
+    - Run `sum by (user) (cortex_query_scheduler_queue_length{namespace="<namespace>"}) > 0` to find tenants with enqueued queries
+    - Check the `Cortex / Slow Queries` dashboard to find slow queries
+  - On a multi-tenant Cortex cluster with **shuffle-sharding for queriers disabled**, you may consider enabling it for that specific tenant to reduce its blast radius. To enable queriers shuffle-sharding for a single tenant you need to set the `max_queriers_per_tenant` limit override for the specific tenant (the value should be set to the number of queriers assigned to the tenant).
+  - On a multi-tenant Cortex cluster with **shuffle-sharding for queriers enabled**, you may consider temporarily increasing the shard size for affected tenants: be aware that this could affect other tenants too, reducing resources available to run other tenant queries. Alternatively, you may choose to do nothing and let Cortex return errors for that given user once the per-tenant queue is full.
 
 ### CortexCacheRequestErrors
 
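The playbook above recommends setting the `max_queriers_per_tenant` limit override to enable queriers shuffle-sharding for a single tenant. A minimal sketch of how that could look with the jsonnet library in this repo follows; it assumes your environment renders `_config.overrides` into Cortex's runtime overrides configuration (an assumption here), and the tenant ID and value are placeholders.

```jsonnet
// Hypothetical per-tenant override (not part of this commit).
// Assumption: this environment uses the cortex jsonnet library from this repo
// and `_config.overrides` is rendered into the runtime overrides config.
{
  _config+:: {
    overrides+: {
      'tenant-a': {  // placeholder tenant ID
        // Set to the number of queriers you want assigned to this tenant.
        max_queriers_per_tenant: 10,  // placeholder value
      },
    },
  },
}
```

In plain Cortex runtime config, the same limit sits under `overrides.<tenant>.max_queriers_per_tenant`.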