
Commit f8b162b

Address review feedback
Signed-off-by: Marco Pracucci <marco@pracucci.com>
1 parent 1e56681 commit f8b162b

2 files changed, +4 -4 lines changed


cortex-mixin/alerts/alerts.libsonnet

Lines changed: 1 addition & 1 deletion
@@ -479,7 +479,7 @@
         },
         annotations: {
           message: |||
-            Ingesters in {{ $labels.namespace }} have an high samples/sec rate.
+            Ingesters in {{ $labels.namespace }} ingest too many samples per second.
           |||,
         },
       },
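For context, the annotation changed above lives inside an alert rule defined in the mixin's jsonnet. Below is a minimal sketch of how such a rule is typically declared; the alert name, expression, threshold, and duration are illustrative assumptions only, since the commit itself touches just the annotation message.

```
// Minimal sketch of a mixin alert rule carrying the updated annotation.
// The alert name, expr, threshold, and duration are assumptions for
// illustration; only the annotation message comes from this commit.
{
  alert: 'CortexIngesterHighIngestionRate',  // hypothetical name
  expr: |||
    sum by (namespace) (rate(cortex_ingester_ingested_samples_total[5m])) > 1e6
  |||,
  'for': '15m',
  labels: {
    severity: 'warning',
  },
  annotations: {
    message: |||
      Ingesters in {{ $labels.namespace }} ingest too many samples per second.
    |||,
  },
}
```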

cortex-mixin/docs/playbooks.md

Lines changed: 3 additions & 3 deletions
@@ -457,16 +457,16 @@ How it **works**:
 - Cortex ingesters are a stateful service
 - Having 2+ ingesters `OOMKilled` may cause a cluster outage
 - Ingester memory baseline usage is primarily influenced by memory allocated by the process (mostly go heap) and mmap-ed files (used by TSDB)
-- Ingester memory short spikes are primarily influenced by queries
-- A pod gets `OOMKilled` once it's working set memory reaches the configured limit, so it's important to prevent ingesters memory utilization (working set memory) from getting close to the limit (we need to keep at least 30% room for spikes due to queries)
+- Ingester memory short spikes are primarily influenced by queries and TSDB head compaction into new blocks (occurring every 2h)
+- A pod gets `OOMKilled` once its working set memory reaches the configured limit, so it's important to prevent ingesters memory utilization (working set memory) from getting close to the limit (we need to keep at least 30% room for spikes due to queries)
 
 How to **fix**:
 - Check if the issue occurs only for few ingesters. If so:
   - Restart affected ingesters 1 by 1 (proceed with the next one once the previous pod has restarted and it's Ready)
     ```
     kubectl -n <namespace> delete pod ingester-XXX
     ```
-  - Restarting an ingester typically reduces the memory allocated by mmap-ed files. Such memory could be reallocated again, but may let you gain more time while working on a longer term solution
+  - Restarting an ingester typically reduces the memory allocated by mmap-ed files. After the restart, ingester may allocate this memory again over time, but it may give more time while working on a longer term solution
 - Check the `Cortex / Writes Resources` dashboard to see if the number of series per ingester is above the target (1.5M). If so:
   - Scale up ingesters
   - Memory is expected to be reclaimed at the next TSDB head compaction (occurring every 2h)
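As a side note on the series-per-ingester check mentioned in the playbook, the same figure can be derived from the `cortex_ingester_memory_series` metric when the `Cortex / Writes Resources` dashboard is not at hand. A hedged sketch in jsonnet follows; the record name and aggregation are assumptions, not part of this commit.

```
// Hypothetical recording rule tracking the average number of in-memory series
// per ingester, per namespace. The record name and aggregation are assumptions;
// cortex_ingester_memory_series is the metric exposed by ingesters.
{
  record: 'namespace:cortex_ingester_memory_series:avg',
  expr: 'avg by (namespace) (cortex_ingester_memory_series)',
}
```

If the resulting value stays above the 1.5M target mentioned in the playbook, scaling up ingesters as described above is the expected remediation.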
