Commit 68c24d5

fix(cache): make updating the max cost of posting cache work again (#9526)
Updating the max cost of the posting list cache is not wired up: the config handler just calls an empty function. This PR adds a method to `MemoryLayer` that allows updating the max cost of its cache, deletes the old empty function, and replaces the call site with a call to the new method.

Updating MaxCost matters because it is the mechanism that actually enforces the memory limit in the underlying cache (Ristretto). Without this PR, changing the cacheMB setting was effectively a placebo: the number changed in the configuration, but the cache's behavior stayed exactly the same.

Implications:

1. "Cost" equals memory usage. In Dgraph's usage of Ristretto (the cache library), the "cost" of an item is calculated to approximate its memory size in bytes, so MaxCost is the memory capacity of the cache.
   - Before the fix: the UpdateMaxCost function was empty. Changing cacheMB from 1 GB to 8 GB via the API updated the configuration variable, but the cache's internal capacity (MaxCost) stayed at 1 GB.
   - After the fix: the new cacheMB value is converted to bytes and passed to ml.cache.data.UpdateMaxCost(), instantly resizing the cache's capacity.

2. Runtime memory management (no restarts). Operators can tune memory usage dynamically without downtime. This is vital for two scenarios:
   - Relieving memory pressure (scaling down): if a Dgraph Alpha is running close to its memory limit and risking an OOM (out of memory) crash, an operator can lower cacheMB. Without this fix, the cache ignores the reduction and keeps holding memory, potentially leading to a crash; with it, the cache immediately evicts items to drop its usage below the new limit, stabilizing the server.
   - Improving performance (scaling up): if a server has spare RAM and query latency is high due to cache misses, an operator can increase cacheMB. Without this fix, the cache does not grow into the extra memory and performance does not improve; with it, the cache expands, storing more posting lists and improving query speeds.
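The resize semantics described above can be illustrated with a minimal, stdlib-only sketch. The `toyCache` type and its eviction loop below are illustrative assumptions standing in for Ristretto, not Dgraph's actual implementation; the point is only that lowering the byte budget triggers eviction immediately, while raising it leaves room to grow.

```go
package main

import "fmt"

// toyCache is a hypothetical stand-in for Ristretto: maxCost is a byte
// budget, and exceeding it triggers eviction.
type toyCache struct {
	maxCost int64
	used    int64
	items   map[string]int64 // key -> cost in bytes
}

func (c *toyCache) set(key string, cost int64) {
	c.items[key] = cost
	c.used += cost
	c.evict()
}

// evict drops items until used <= maxCost. Eviction order is arbitrary
// here; Ristretto uses an admission/eviction policy.
func (c *toyCache) evict() {
	for k, cost := range c.items {
		if c.used <= c.maxCost {
			return
		}
		delete(c.items, k)
		c.used -= cost
	}
}

// UpdateMaxCost mirrors the behavior the commit wires up: the new budget
// takes effect immediately, evicting if the cache shrank.
func (c *toyCache) UpdateMaxCost(maxCost int64) {
	c.maxCost = maxCost
	c.evict()
}

func main() {
	cacheMB := int64(1024)
	c := &toyCache{maxCost: cacheMB << 20, items: map[string]int64{}}
	c.set("pl1", 600<<20)
	c.set("pl2", 300<<20)
	fmt.Println("used MB:", c.used>>20) // 900, within the 1024 MB budget

	// Operator lowers cacheMB from 1024 to 512: eviction happens now,
	// not on the next restart.
	c.UpdateMaxCost(512 << 20)
	fmt.Println("within budget:", c.used <= c.maxCost) // true
}
```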
1 parent 8c2875f commit 68c24d5

File tree

3 files changed (+10, −4 lines)


posting/lists.go

Lines changed: 0 additions & 3 deletions
@@ -46,9 +46,6 @@ func SetEnabledDetailedMetrics(enableMetrics bool) {
 	EnableDetailedMetrics = enableMetrics
 }
 
-func UpdateMaxCost(maxCost int64) {
-}
-
 // Cleanup waits until the closer has finished processing.
 func Cleanup() {
 	closer.SignalAndWait()

posting/mvcc.go

Lines changed: 7 additions & 0 deletions
@@ -405,6 +405,13 @@ func (ml *MemoryLayer) del(key []byte) {
 	ml.cache.del(key)
 }
 
+func (ml *MemoryLayer) UpdateMaxCost(maxCost int64) {
+	if ml.cache == nil || ml.cache.data == nil {
+		return
+	}
+	ml.cache.data.UpdateMaxCost(maxCost)
+}
+
 type IterateDiskArgs struct {
 	Prefix   []byte
 	Prefetch bool

worker/worker.go

Lines changed: 3 additions & 1 deletion
@@ -150,7 +150,9 @@ func UpdateCacheMb(memoryMB int64) error {
 	blockCacheSize := (cachePercent[1] * (memoryMB << 20)) / 100
 	indexCacheSize := (cachePercent[2] * (memoryMB << 20)) / 100
 
-	posting.UpdateMaxCost(plCacheSize)
+	if posting.MemLayerInstance != nil {
+		posting.MemLayerInstance.UpdateMaxCost(plCacheSize)
+	}
 	if _, err := pstore.CacheMaxCost(badger.BlockCache, blockCacheSize); err != nil {
 		return errors.Wrapf(err, "cannot update block cache size")
 	}
