Atomic GetOrAdd
Use a cache builder to create a cache with atomic GetOrAdd.
By default, both ConcurrentLru and ConcurrentLfu behave the same as ConcurrentDictionary. Modifications to the internal hash map are protected by locks. However, the valueFactory delegate is called outside the locks to avoid the problems that can arise from executing unknown code under a lock.
Because another thread can insert a value for the same key while valueFactory is still running, there is no guarantee that the value produced by a given valueFactory call is the one inserted into the dictionary and returned. When GetOrAdd is called simultaneously on different threads, valueFactory may be invoked multiple times, but only one key/value pair is added to the dictionary; the values produced by the other callers are discarded.
Under load, this can result in a cascading failure known as a cache stampede.
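For example, the sketch below (an assumed setup, not part of the library docs: the BitFaster.Caching package, an arbitrary capacity of 128, and a simulated slow factory) races several threads against the same key on a default ConcurrentLru. The factory call count can exceed one, even though only a single entry ends up in the cache.

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using BitFaster.Caching.Lru;

class StampedeDemo
{
    static async Task Main()
    {
        // Default ConcurrentLru: GetOrAdd is not atomic, so valueFactory
        // may run once per racing thread even though only one value is stored.
        var lru = new ConcurrentLru<int, string>(128);

        int factoryCalls = 0;

        var tasks = Enumerable.Range(0, 8).Select(_ => Task.Run(() =>
            lru.GetOrAdd(1, key =>
            {
                Interlocked.Increment(ref factoryCalls);
                Thread.Sleep(50); // simulate an expensive value computation
                return $"value-{key}";
            })));

        await Task.WhenAll(tasks);

        // factoryCalls may be greater than 1 here, even though the cache
        // holds a single entry for key 1.
        Console.WriteLine($"valueFactory invoked {factoryCalls} time(s)");
    }
}
```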
To mitigate stampedes, caches can be configured with atomic GetOrAdd. When multiple threads attempt to insert a value for the same key, the first caller invokes valueFactory and stores the value. Subsequent callers block until the new value has been generated.
Atomic GetOrAdd is enabled by creating a cache using a cache builder.
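As a minimal sketch (key/value types and the capacity of 128 are arbitrary, and the specific builder method names shown, such as WithAtomicGetOrAdd, are assumed from the library's builder conventions rather than stated in this page):

```csharp
using System;
using BitFaster.Caching;
using BitFaster.Caching.Lru;

// Build a ConcurrentLru with atomic GetOrAdd enabled. Racing callers for the
// same key block until the first caller's valueFactory has produced a value.
ICache<int, string> cache = new ConcurrentLruBuilder<int, string>()
    .WithCapacity(128)
    .WithAtomicGetOrAdd()
    .Build();

// valueFactory runs at most once per key; other callers wait for the result.
string value = cache.GetOrAdd(1, key => $"value-{key}");
Console.WriteLine(value);
```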