High performance, thread-safe in-memory caching primitives for .NET.

| Class | Description |
|:-------|:---------|
| ConcurrentLru | Represents a thread-safe bounded size pseudo LRU.<br><br>A drop-in replacement for ConcurrentDictionary, but with bounded size. Maintains pseudo order, with a better hit rate than a pure LRU and not prone to lock contention. |
| ConcurrentTLru | Represents a thread-safe bounded size pseudo TLRU, where items have a TTL.<br><br>As ConcurrentLru, but with a [time aware least recently used (TLRU)](https://en.wikipedia.org/wiki/Cache_replacement_policies#Time_aware_least_recently_used_(TLRU)) eviction policy. If the values generated for each key can change over time, ConcurrentTLru is eventually consistent, where the inconsistency window = TTL. |
| SingletonCache | Represents a thread-safe cache of key value pairs, which guarantees a single instance of each value. Values are discarded immediately when no longer in use to conserve memory. |
| `Scoped<IDisposable>` | Represents a thread-safe wrapper for storing IDisposable objects in a cache that may dispose and invalidate them. The scope keeps the object alive until all callers have finished. |

# Usage
## ConcurrentLru/ConcurrentTLru
`ConcurrentLru` and `ConcurrentTLru` are intended as a drop-in replacement for `ConcurrentDictionary`, and a much faster alternative to the `System.Runtime.Caching.MemoryCache` family of classes (e.g. `HttpRuntime.Cache`, `System.Web.Caching`, etc.).

Choose a capacity and use just like ConcurrentDictionary:
```csharp
int capacity = 666;
var lru = new ConcurrentLru<int, SomeItem>(capacity);

// fetch the cached value, adding it via the factory delegate if missing
var value = lru.GetOrAdd(1, k => new SomeItem(k));
```
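
## SingletonCache

`SingletonCache` hands out a single shared instance per key: `Acquire` returns a lifetime handle, and the cached value is kept alive until every outstanding lifetime has been disposed. A minimal sketch of the pattern, assuming `SingletonCache` lives in the `BitFaster.Caching` namespace and the lifetime exposes the instance via a `Value` property (the URL-keyed lock is illustrative):

```csharp
using BitFaster.Caching;

// one shared lock object per URL, across all threads
var urlLocks = new SingletonCache<string, object>();

using (var lifetime = urlLocks.Acquire("https://example.com"))
{
    lock (lifetime.Value)
    {
        // exactly one lock instance exists for this URL while any caller holds it
    }
}
```
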
## Why not MemoryCache?

MemoryCache is perfectly serviceable, but in some situations it can be a bottleneck:

- Makes heap allocations when the native object key is not type string (see the sketch below).
- Is not 'scan' resistant: fetching all keys will load everything into memory.
- Does not scale well with concurrent writes.
- Executes code for perf counters that can't be disabled.
- Uses a heuristic to estimate memory used, and the 'trim' process may remove useful items.
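
To make the first point concrete: MemoryCache keys must be strings, so a Guid-keyed lookup converts (and allocates) a string on every call, while ConcurrentLru can key on the Guid directly. A minimal sketch, assuming the `BitFaster.Caching.Lru` namespace; the capacity and object values are arbitrary:

```csharp
using System;
using System.Runtime.Caching;
using BitFaster.Caching.Lru;

Guid key = Guid.NewGuid();

// MemoryCache: every lookup allocates a fresh string for the key
var memoryCache = MemoryCache.Default;
memoryCache.Set(key.ToString(), new object(), DateTimeOffset.Now.AddMinutes(5));
var hit = memoryCache.Get(key.ToString());

// ConcurrentLru: keys on the Guid directly, no conversion allocation
var lru = new ConcurrentLru<Guid, object>(128);
var cached = lru.GetOrAdd(key, k => new object());
```
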
# Performance
Cache size = *N* / 10 (so we can cache 10% of the total set).

In this test, the same items are fetched repeatedly and no items are evicted. This is representative of a high hit rate scenario with a low number of hot items.

- The ConcurrentLru family does not move items in the queues; for pure cache hits it just marks the item as accessed.
- Classic Lru must maintain item order, and internally splices the fetched item to the head of the linked list.
- MemoryCache and ConcurrentDictionary represent a pure lookup. This is the best case scenario for MemoryCache, since the lookup key is a string (if the key were a Guid, using MemoryCache adds string conversion overhead).
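
A hypothetical BenchmarkDotNet repro of this lookup test (the class names, capacity, and `FastConcurrentLru` constructor are assumptions; both caches are warmed during setup so every measured call is a pure hit):

```csharp
using System.Collections.Concurrent;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using BitFaster.Caching.Lru;

public class LookupBenchmark
{
    private readonly ConcurrentDictionary<int, string> dictionary = new();
    private readonly FastConcurrentLru<int, string> lru = new(9);

    [GlobalSetup]
    public void Setup()
    {
        // pre-populate so the benchmarks below measure pure cache hits
        dictionary.TryAdd(1, "one");
        lru.GetOrAdd(1, k => "one");
    }

    [Benchmark(Baseline = true)]
    public string DictionaryGetOrAdd() => dictionary.GetOrAdd(1, k => "one");

    [Benchmark]
    public string LruGetOrAdd() => lru.GetOrAdd(1, k => "one");
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<LookupBenchmark>();
}
```
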
FastConcurrentLru does not allocate and is approximately 10x faster than MemoryCache.