This repository was archived by the owner on Apr 28, 2025. It is now read-only.
# CHANGELOG.md (10 additions, 2 deletions)
@@ -2,17 +2,25 @@

## master / unreleased

* [ENHANCEMENT] Cortex-mixin: Include `cortex-gw-internal` naming variation in default `gateway` job names. #328
* [CHANGE] `namespace` template variable in dashboards now only selects namespaces for selected clusters. #311
* [CHANGE] Alertmanager: mounted overrides configmap to alertmanager too. #315
* [CHANGE] Memcached: upgraded memcached from `1.5.17` to `1.6.9`. #316
* [CHANGE] `CortexIngesterRestarts` alert severity changed from `critical` to `warning`. #321
* [CHANGE] Store-gateway: increased memory request and limit respectively from 6GB / 6GB to 12GB / 18GB. #322
* [CHANGE] Store-gateway: increased `-blocks-storage.bucket-store.max-chunk-pool-bytes` from 2GB (default) to 12GB. #322
* [CHANGE] Dashboards: added overridable `job_labels` and `cluster_labels` to the configuration object as label lists to uniquely identify jobs and clusters in the metric names and group-by lists in dashboards. #319
* [CHANGE] Dashboards: `alert_aggregation_labels` has been removed from the configuration and overriding this value has been deprecated. Instead the labels are now defined by the `cluster_labels` list, and should be overridden accordingly through that list. #319
* [CHANGE] Ingester/Ruler: set `-server.grpc-max-send-msg-size-bytes` and `-server.grpc-max-recv-msg-size-bytes` to sensible default values (10MB). #326
* [CHANGE] Renamed `CortexCompactorHasNotUploadedBlocksSinceStart` to `CortexCompactorHasNotUploadedBlocks`. #334
* [CHANGE] Renamed `CortexCompactorRunFailed` to `CortexCompactorHasNotSuccessfullyRunCompaction`. #334
* [CHANGE] Renamed `CortexInconsistentConfig` alert to `CortexInconsistentRuntimeConfig` and increased severity to `critical`. #335
* [CHANGE] Increased `CortexBadRuntimeConfig` alert severity to `critical` and removed support for `cortex_overrides_last_reload_successful` metric (was removed in Cortex 1.3.0). #335
* [ENHANCEMENT] cortex-mixin: Make `cluster_namespace_deployment:kube_pod_container_resource_requests_{cpu_cores,memory_bytes}:sum` backwards compatible with `kube-state-metrics` v2.0.0. #317
* [ENHANCEMENT] Added documentation text panels and descriptions to reads and writes dashboards. #324
* [BUGFIX] Fixed `CortexIngesterHasNotShippedBlocks` alert false positive in case an ingester instance had ingested samples in the past, then no traffic was received for a long period and then it started receiving samples again. #308
* [BUGFIX] Alertmanager: fixed `--alertmanager.cluster.peers` CLI flag passed to alertmanager when HA is enabled. #329
# cortex-mixin/docs/playbooks.md (91 additions, 21 deletions)
@@ -26,11 +26,63 @@ If nothing obvious from the above, check for increased load:

### CortexIngesterReachingSeriesLimit

This alert fires when the `max_series` per ingester instance limit is enabled and the actual number of in-memory series in an ingester is reaching the limit. Once the limit is reached, writes to the ingester will fail (5xx) for new series, while appending samples to existing ones will continue to succeed.

In case of **emergency**:
- If the actual number of series is very close to or has already hit the limit, you can increase the limit via runtime config to gain some time
- Increasing the limit will increase the ingesters' memory utilization. Please monitor the ingesters' memory utilization via the `Cortex / Writes Resources` dashboard

How the limit is **configured**:
- The limit can be configured either via CLI flag (`-ingester.instance-limits.max-series`) or in the runtime config:
  ```
  ingester_limits:
    max_series: <int>
  ```
- The mixin configures the limit in the runtime config and it can be fine-tuned via:
  ```
  _config+:: {
    ingester_instance_limits+:: {
      max_series: <int>
    }
  }
  ```
- When configured in the runtime config, changes are applied live without requiring an ingester restart
- The configured limit can be queried via `cortex_ingester_instance_limits{limit="max_series"}`

How to **fix**:
1. **Scale up ingesters**<br />
   Scaling up ingesters will lower the number of series per ingester. However, the effect of this change will take up to 4h to be fully visible: after the scale up we need to wait until all stale series are dropped from memory by TSDB head compaction (with the default config, TSDB keeps in-memory series up to 3h old and the head gets compacted every 2h).
2. **Temporarily increase the limit**<br />
   If the actual number of series is very close to or has already hit the limit, or if you foresee the ingester will hit the limit before the stale series are dropped as an effect of the scale up, you should also temporarily increase the limit.
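As a quick check before scaling or raising limits, the utilization of the limit can be computed in PromQL; a sketch, assuming the standard `cortex_ingester_memory_series` gauge is available:

```
# Per-ingester utilization of the max_series limit (1 = limit hit)
cortex_ingester_memory_series
  / ignoring(limit)
cortex_ingester_instance_limits{limit="max_series"}
```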
### CortexIngesterReachingTenantsLimit

This alert fires when the `max_tenants` per ingester instance limit is enabled and the actual number of tenants in an ingester is reaching the limit. Once the limit is reached, writes to the ingester will fail (5xx) for new tenants, while they will continue to succeed for previously existing ones.

In case of **emergency**:
- If the actual number of tenants is very close to or has already hit the limit, you can increase the limit via runtime config to gain some time
- Increasing the limit will increase the ingesters' memory utilization. Please monitor the ingesters' memory utilization via the `Cortex / Writes Resources` dashboard

How the limit is **configured**:
- The limit can be configured either via CLI flag (`-ingester.instance-limits.max-tenants`) or in the runtime config:
  ```
  ingester_limits:
    max_tenants: <int>
  ```
- The mixin configures the limit in the runtime config and it can be fine-tuned via:
  ```
  _config+:: {
    ingester_instance_limits+:: {
      max_tenants: <int>
    }
  }
  ```
- When configured in the runtime config, changes are applied live without requiring an ingester restart
- The configured limit can be queried via `cortex_ingester_instance_limits{limit="max_tenants"}`

How to **fix**:
1. Ensure shuffle-sharding is enabled in the Cortex cluster
2. Assuming shuffle-sharding is enabled, scaling up ingesters will lower the number of tenants per ingester. However, the effect of this change will be visible only after the `-blocks-storage.tsdb.close-idle-tsdb-timeout` period, so you may have to temporarily increase the limit
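A similar PromQL sketch shows how close each ingester is to the tenants limit, assuming the `cortex_ingester_memory_users` gauge (number of tenants with an open TSDB in the ingester) is available:

```
# Per-ingester utilization of the max_tenants limit (1 = limit hit)
cortex_ingester_memory_users
  / ignoring(limit)
cortex_ingester_instance_limits{limit="max_tenants"}
```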
### CortexRequestLatency

First establish if the alert is for read or write latency. The alert should say.
@@ -220,11 +272,21 @@ Same as [`CortexCompactorHasNotSuccessfullyCleanedUpBlocks`](#CortexCompactorHasNotSuccessfullyCleanedUpBlocks)

This alert fires when a Cortex compactor has not uploaded any compacted blocks to the storage for a long time.

How to **investigate**:
- If the alert `CortexCompactorHasNotSuccessfullyRunCompaction` has fired as well, then investigate that issue first
- If the alert `CortexIngesterHasNotShippedBlocks` or `CortexIngesterHasNotShippedBlocksSinceStart` has fired as well, then investigate that issue first
- Ensure ingesters are successfully shipping blocks to the storage
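The time since the last successful upload can also be checked from metrics. A sketch, assuming the compactor exposes the Thanos objstore gauge `thanos_objstore_bucket_last_successful_upload_time` with a `component` label (label names may differ across versions):

```
# Seconds since the compactor last successfully uploaded a block
time() - thanos_objstore_bucket_last_successful_upload_time{component="compactor"}
```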
This alert fires if the compactor is not able to successfully compact all discovered compactable blocks (across all tenants).

When this alert fires, the compactor may still have successfully compacted some blocks but, for some reason, the compaction of other blocks is consistently failing. A common case is when the compactor is trying to compact a corrupted block for a single tenant: in this case the compaction of blocks for other tenants keeps working, but compaction for the affected tenant is blocked by the corrupted block.

How to **investigate**:
- Look for any error in the compactor logs
- Corruption: [`not healthy index found`](#compactor-is-failing-because-of-not-healthy-index-found)
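Failed runs also show up in metrics; a sketch, assuming the standard `cortex_compactor_runs_failed_total` counter is exported:

```
# Compactor replicas with failed compaction runs over the last 2h
increase(cortex_compactor_runs_failed_total[2h]) > 0
```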
#### Compactor is failing because of `not healthy index found`

The compactor may fail to compact blocks due to a corrupted block index found in one of the source blocks:

@@ -249,18 +311,6 @@ To rename a block stored on GCS you can use the `gsutil` CLI:

This alert fires when the bucket index, for a given tenant, has not been updated for a long time. The bucket index is expected to be periodically updated by the compactor and is used by queriers and store-gateways to get an almost up-to-date view over the bucket store.
@@ -317,13 +367,33 @@ _TODO: this playbook has not been written yet._

_TODO: this playbook has not been written yet._

### CortexInconsistentRuntimeConfig

This alert fires if multiple replicas of the same Cortex service have been using a different runtime config for a longer period of time.

The Cortex runtime config is a config file which gets live reloaded by Cortex at runtime. In order for Cortex to work properly, the loaded config is expected to be exactly the same across multiple replicas of the same Cortex service (eg. distributors, ingesters, ...). When the config changes, there may be short periods of time during which some replicas have loaded the new config and others are still running on the previous one, but it shouldn't last for more than a few minutes.

How to **investigate**:
- Check how many different config file versions (hashes) are reported:
  ```
  count by (sha256) (cortex_runtime_config_hash{namespace="<namespace>"})
  ```
- Check which replicas are running a different version
- Check if the runtime config has been updated on the affected replicas' filesystem. Check the `-runtime-config.file` command line argument to find the location of the file.
- Check the affected replicas' logs and look for any error loading the runtime config
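To identify the replicas running a different version, the same metric can be filtered by hash; a sketch, where `<majority-sha256>` is a placeholder for the most common hash found with the `count by (sha256)` query:

```
# Replicas not running the most common config version
cortex_runtime_config_hash{namespace="<namespace>", sha256!="<majority-sha256>"}
```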
### CortexBadRuntimeConfig

This alert fires if Cortex is unable to reload the runtime config.

This typically means an invalid runtime config was deployed. Cortex keeps running with the previous (valid) version of the runtime config; running Cortex replicas and the system availability shouldn't be affected, but new replicas won't be able to start up until the runtime config is fixed.

How to **investigate**:
- Check the latest runtime config update (it's likely to be broken)
- Check Cortex logs to get more details about what's wrong with the config
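The replicas failing the reload can usually be listed from metrics; a sketch, assuming the `cortex_runtime_config_last_reload_successful` gauge (the successor of the removed `cortex_overrides_last_reload_successful`) is exported:

```
# Replicas whose last runtime config reload failed
cortex_runtime_config_last_reload_successful{namespace="<namespace>"} == 0
```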
### CortexQuerierCapacityFull

@@ -347,15 +417,15 @@ _TODO: this playbook has not been written yet._

### CortexCheckpointCreationFailed

_This alert applies to Cortex chunks storage only._

### CortexCheckpointDeletionFailed

_This alert applies to Cortex chunks storage only._

### CortexProvisioningMemcachedTooSmall

_This alert applies to Cortex chunks storage only._