This repository was archived by the owner on Apr 28, 2025. It is now read-only.
CHANGELOG.md (2 additions, 1 deletion)
@@ -18,6 +18,7 @@
* [CHANGE] Increased `CortexBadRuntimeConfig` alert severity to `critical` and removed support for `cortex_overrides_last_reload_successful` metric (was removed in Cortex 1.3.0). #335
* [CHANGE] Grafana 'min step' changed to 15s so dashboards show better detail. #340
* [CHANGE] Removed `CortexCacheRequestErrors` alert. This alert was not working because the legacy Cortex cache client instrumentation doesn't track errors. #346
* [ENHANCEMENT] cortex-mixin: Make `cluster_namespace_deployment:kube_pod_container_resource_requests_{cpu_cores,memory_bytes}:sum` backwards compatible with `kube-state-metrics` v2.0.0. #317
* [ENHANCEMENT] Added documentation text panels and descriptions to reads and writes dashboards. #324
* [ENHANCEMENT] Dashboards: defined container functions for common resources panels: containerDiskWritesPanel, containerDiskReadsPanel, containerDiskSpaceUtilization. #331
@@ -80,7 +81,7 @@
  - Cortex / Queries: added bucket index load operations and latency (available only when bucket index is enabled)
  - Alerts: added "CortexBucketIndexNotUpdated" (bucket index only) and "CortexTenantHasPartialBlocks"
* [ENHANCEMENT] The name of the overrides configmap is now customisable via `$._config.overrides_configmap`. #244
- * [ENHANCEMENT] Added flag to control usage of bucket-index, and enable it by default when using blocks. #254
+ * [ENHANCEMENT] Added flag to control usage of bucket-index and disable it by default when using blocks. #254
* [ENHANCEMENT] Added the alert `CortexIngesterHasUnshippedBlocks`. #255
* [BUGFIX] Honor configured `per_instance_label` in all panels. #239
* [BUGFIX] `CortexRequestLatency` alert now ignores long-running requests on query-scheduler. #242
cortex-mixin/docs/playbooks.md (29 additions, 8 deletions)
@@ -50,10 +50,12 @@ How the limit is **configured**:
- The configured limit can be queried via `cortex_ingester_instance_limits{limit="max_series"}`

How to **fix**:
+ 1. **Temporarily increase the limit**<br />
+    If the actual number of series is very close to or has already hit the limit, or if you foresee the ingester will hit the limit before the stale series are dropped as an effect of the scale-up, you should also temporarily increase the limit (see the runtime-config sketch after this list).
+ 1. **Check if the shuffle-sharding shard size is correct**<br />
+    When shuffle-sharding is enabled, we target 100K series / tenant / ingester. You can run `avg by (user) (cortex_ingester_memory_series_created_total{namespace="<namespace>"} - cortex_ingester_memory_series_removed_total{namespace="<namespace>"}) > 100000` to find tenants with > 100K series / ingester. You may want to increase the shard size for these tenants.
1. **Scale up ingesters**<br />
   Scaling up ingesters will lower the number of series per ingester. However, the effect of this change will take up to 4h, because after the scale-up we need to wait until all stale series are dropped from memory as an effect of TSDB head compaction, which can take up to 4h (with the default config, TSDB keeps in-memory series up to 3h old and the head is compacted every 2h).
- 2. **Temporarily increase the limit**<br />
-    If the actual number of series is very close or already hit the limit, or if you foresee the ingester will hit the limit before dropping the stale series as effect of the scale up, you should also temporarily increase the limit.

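Where the limit increase and the shard-size change are applied depends on your setup, but with the standard Cortex runtime-config/overrides file they would look roughly like the sketch below. The key names (`ingester_limits.max_series` for the per-ingester instance limit, the per-tenant `ingestion_tenant_shard_size` limit) and all values are assumptions to verify against your Cortex version and deployment:

```yaml
# Hypothetical runtime-config fragment; verify key names against your Cortex version.
overrides:
  tenant-with-many-series:            # placeholder tenant ID
    ingestion_tenant_shard_size: 30   # larger ingester shard for this tenant (shuffle-sharding)
ingester_limits:
  max_series: 2000000                 # temporary bump of the per-ingester in-memory series limit
```

Remember to revert the temporary limit bump once the scale-up has taken effect and stale series have been dropped from memory.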
### CortexIngesterReachingTenantsLimit
@@ -402,17 +404,36 @@ How to **investigate**:
- Check the latest runtime config update (it's likely to be broken); a minimal well-formed config is sketched below for comparison
- Check Cortex logs to get more details about what's wrong with the config

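When reviewing the latest update, it can help to compare it against a minimal well-formed file: the runtime config is a YAML document whose `overrides` section maps tenant IDs to limit values. The tenant name and limits below are examples only:

```yaml
# Minimal, syntactically valid runtime config (example tenant and values).
overrides:
  example-tenant:
    ingestion_rate: 100000
    max_global_series_per_user: 1500000
```

A failed reload is typically a YAML syntax error or an unknown field, so diffing the broken update against the previous revision usually points at the offending line.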
- ### CortexQuerierCapacityFull
-
- _TODO: this playbook has not been written yet._
-
### CortexFrontendQueriesStuck

- _TODO: this playbook has not been written yet._
+ This alert fires if Cortex is running without query-scheduler and queries are piling up in the query-frontend queue.
+
+ The procedure to investigate it is the same as the one for [`CortexSchedulerQueriesStuck`](#CortexSchedulerQueriesStuck): please see the other playbook for more details.

### CortexSchedulerQueriesStuck

- _TODO: this playbook has not been written yet._
+ This alert fires if queries are piling up in the query-scheduler.
+
+ How it **works**:
+ - A query-frontend API endpoint is called to execute a query
+ - The query-frontend enqueues the request to the query-scheduler
+ - The query-scheduler is responsible for dispatching enqueued queries to idle querier workers
+ - The querier runs the query, sends the response back directly to the query-frontend and notifies the query-scheduler that it can process another query
+
+ How to **investigate**:
+ - `panic`: look for the stack trace in the logs and investigate from there
+ - Is QPS increased?
+   - Scale up queriers to satisfy the increased workload
+ - Is query latency increased?
+   - An increased latency reduces the number of queries we can run / sec: once all workers are busy, new queries will pile up in the queue
+   - Temporarily scale up queriers to try to stop the bleed
+ - Check if a specific tenant is running heavy queries
+   - Run `sum by (user) (cortex_query_scheduler_queue_length{namespace="<namespace>"}) > 0` to find tenants with enqueued queries
+   - Check the `Cortex / Slow Queries` dashboard to find slow queries
+   - On a multi-tenant Cortex cluster with **shuffle-sharding for queriers disabled**, you may consider enabling it for that specific tenant to reduce its blast radius. To enable querier shuffle-sharding for a single tenant you need to set the `max_queriers_per_tenant` limit override for that tenant (the value should be set to the number of queriers assigned to the tenant); see the override sketch after this list.
+   - On a multi-tenant Cortex cluster with **shuffle-sharding for queriers enabled**, you may consider temporarily increasing the shard size for affected tenants: be aware that this could affect other tenants too, reducing the resources available to run their queries. Alternatively, you may choose to do nothing and let Cortex return errors for that given tenant once the per-tenant queue is full.
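For the shuffle-sharding options above, the `max_queriers_per_tenant` limit override mentioned in the text is a per-tenant setting; a sketch, with a placeholder tenant ID and value:

```yaml
# Hypothetical per-tenant override limiting how many queriers a heavy tenant can use.
overrides:
  heavy-tenant:
    max_queriers_per_tenant: 10   # set to the number of queriers assigned to the tenant
```

Raising the same value for a tenant that already has shuffle-sharding enabled enlarges its shard, at the cost of sharing more querier capacity with that tenant.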