Atomic GetOrAdd

Alex Peck edited this page Sep 18, 2022 · 7 revisions

Use a cache builder to create a cache with atomic GetOrAdd.

Default cache behavior matches ConcurrentDictionary

By default, both ConcurrentLru and ConcurrentLfu behave the same as ConcurrentDictionary. Modifications to the internal hash map are protected by locks. However, the valueFactory delegate is called outside the locks to avoid the problems that can arise from executing unknown code under a lock.

Since another thread can insert a key/value pair while valueFactory is generating a value, there is no guarantee that the value valueFactory produces will be the one inserted into the dictionary and returned. When GetOrAdd is called simultaneously on different threads, valueFactory may be invoked multiple times, but only one key/value pair will be added to the dictionary. The last write wins.
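This race can be demonstrated directly with ConcurrentDictionary. The sketch below (a standalone illustration, not library code) uses a Barrier inside valueFactory to force two concurrent GetOrAdd calls to overlap, so the factory is guaranteed to run twice, yet both callers receive the same stored value:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

var dict = new ConcurrentDictionary<int, string>();
int factoryCalls = 0;
using var barrier = new Barrier(2);

// The Barrier forces both valueFactory invocations to overlap: each call
// waits until the other has also entered the factory. The factory therefore
// runs twice, but GetOrAdd stores only one produced value, and both callers
// receive that single stored value.
string Factory(int key)
{
    Interlocked.Increment(ref factoryCalls);
    barrier.SignalAndWait(); // wait until both factories are running
    return $"value from thread {Environment.CurrentManagedThreadId}";
}

var t1 = Task.Run(() => dict.GetOrAdd(1, Factory));
var t2 = Task.Run(() => dict.GetOrAdd(1, Factory));
Task.WaitAll(t1, t2);

Console.WriteLine($"valueFactory ran {factoryCalls} times");                // 2
Console.WriteLine($"both callers got the same value: {t1.Result == t2.Result}"); // True
```

If valueFactory were expensive (a database query, say), both invocations would pay that cost even though one result is discarded.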

Under load, this can result in a cascading failure known as a cache stampede.

Atomic GetOrAdd

To mitigate stampedes, caches can be configured with atomic GetOrAdd. When multiple threads attempt to insert a value for the same key, the first caller invokes valueFactory and stores the value; subsequent callers block until the new value has been generated.
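These semantics can be illustrated with the classic ConcurrentDictionary plus Lazy&lt;T&gt; pattern (an equivalent technique, not the library's internal implementation): GetOrAdd races only to insert a cheap Lazy wrapper, and Lazy&lt;T&gt; guarantees the expensive factory runs exactly once while losing threads block on .Value:

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

var cache = new ConcurrentDictionary<int, Lazy<string>>();
int factoryCalls = 0;

// GetOrAdd may create several Lazy wrappers under contention, but only one
// is stored, and only the stored wrapper's Value is ever evaluated. Lazy<T>'s
// default ExecutionAndPublication mode runs the factory once; other callers
// block on .Value until the winner's result is available.
string GetOrAddAtomic(int key, Func<int, string> valueFactory) =>
    cache.GetOrAdd(key, k => new Lazy<string>(() => valueFactory(k))).Value;

var results = await Task.WhenAll(Enumerable.Range(0, 8).Select(_ =>
    Task.Run(() => GetOrAddAtomic(42, k =>
    {
        Interlocked.Increment(ref factoryCalls);
        Thread.Sleep(50); // simulate expensive work
        return $"value for {k}";
    }))));

Console.WriteLine($"valueFactory ran {factoryCalls} time(s)");              // 1
Console.WriteLine($"all callers agree: {results.All(r => r == results[0])}"); // True
```

The trade-off is that blocked callers wait on the factory, so a slow or faulty valueFactory delays every caller for that key.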

Atomic GetOrAdd is enabled by creating a cache using a cache builder.
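A minimal sketch of building such a cache, assuming the BitFaster.Caching package and its builder methods (WithCapacity and WithAtomicGetOrAdd; check the builder API for the exact options available in your version):

```csharp
using BitFaster.Caching;
using BitFaster.Caching.Lru;

// Build a ConcurrentLru with atomic GetOrAdd enabled. With this option,
// concurrent GetOrAdd calls for the same key invoke valueFactory once;
// other callers block until the value is available.
ICache<int, string> cache = new ConcurrentLruBuilder<int, string>()
    .WithCapacity(128)
    .WithAtomicGetOrAdd()
    .Build();

string value = cache.GetOrAdd(1, k => $"value for {k}");
```

Without WithAtomicGetOrAdd, the builder produces a cache with the default ConcurrentDictionary-like behavior described above.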
