Commit 15d5c05

Update README.md
1 parent 1cf44aa commit 15d5c05


README.md

Lines changed: 9 additions & 6 deletions
@@ -11,17 +11,19 @@ High performance, thread-safe in-memory caching primitives for .NET.
 
 | Class | Description |
 |:-------|:---------|
-| ConcurrentLru | Bounded size pseudo LRU.<br><br>A drop-in replacement for ConcurrentDictionary, but with bounded size. Maintains pseudo order, with a better hit rate than a pure LRU and not prone to lock contention. |
-| ConcurrentTlru | Bounded size pseudo LRU, items have a TTL.<br><br>Same as ConcurrentLru, but with a [time aware least recently used (TLRU)](https://en.wikipedia.org/wiki/Cache_replacement_policies#Time_aware_least_recently_used_(TLRU)) eviction policy. If the values generated for each key can change over time, ConcurrentTlru is eventually consistent, where the inconsistency window = TTL. |
-| SingletonCache | Cache singleton objects by key. Discard when no longer in use. Thread-safe guarantee of a single instance. |
-| Scoped<IDisposable> | A thread-safe wrapper for storing IDisposable objects in a cache that may dispose and invalidate them. The scope keeps the object alive until all callers have finished. |
+| ConcurrentLru | Represents a thread-safe bounded size pseudo LRU.<br><br>A drop-in replacement for ConcurrentDictionary, but with bounded size. Maintains pseudo order, with a better hit rate than a pure LRU and not prone to lock contention. |
+| ConcurrentTLru | Represents a thread-safe bounded size pseudo TLRU, where items have a TTL.<br><br>Like ConcurrentLru, but with a [time aware least recently used (TLRU)](https://en.wikipedia.org/wiki/Cache_replacement_policies#Time_aware_least_recently_used_(TLRU)) eviction policy. If the values generated for each key can change over time, ConcurrentTLru is eventually consistent, where the inconsistency window = TTL. |
+| SingletonCache | Represents a thread-safe cache of key value pairs, which guarantees a single instance of each value. Values are discarded immediately when no longer in use to conserve memory. |
+| Scoped<IDisposable> | Represents a thread-safe wrapper for storing IDisposable objects in a cache that may dispose and invalidate them. The scope keeps the object alive until all callers have finished. |
 
 # Usage
 
 ## ConcurrentLru/ConcurrentTLru
 
 `ConcurrentLru` and `ConcurrentTLru` are intended as a drop-in replacement for `ConcurrentDictionary`, and a much faster alternative to the `System.Runtime.Caching.MemoryCache` family of classes (e.g. `HttpRuntime.Cache`, `System.Web.Caching`, etc.).
 
+Choose a capacity and use just like ConcurrentDictionary:
+
 ```csharp
 int capacity = 666;
 var lru = new ConcurrentLru<int, SomeItem>(capacity);
 ```
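The hunk cuts the snippet off after the constructor call. Below is a minimal sketch of the complete call pattern; the `BitFaster.Caching.Lru` namespace, the `GetOrAdd(key, valueFactory)` method, and the `ConcurrentTLru` constructor taking a capacity plus a `TimeSpan` TTL are assumptions here, not shown in this hunk.

```csharp
using System;
using BitFaster.Caching.Lru; // assumed namespace

// Stand-in for the snippet's cached value type.
public record SomeItem(int Key);

public static class UsageExample
{
    public static void Run()
    {
        int capacity = 666;
        var lru = new ConcurrentLru<int, SomeItem>(capacity);

        // Assumed API: returns the cached value, invoking the factory
        // only on a miss, like ConcurrentDictionary.GetOrAdd.
        SomeItem value = lru.GetOrAdd(1, k => new SomeItem(k));

        // Assumed TLRU constructor: same usage, but entries expire after
        // the TTL, bounding the staleness window to 5 minutes here.
        var tlru = new ConcurrentTLru<int, SomeItem>(capacity, TimeSpan.FromMinutes(5));
        SomeItem fresh = tlru.GetOrAdd(1, k => new SomeItem(k));
    }
}
```

Whatever the exact constructor shapes, the pattern is the point: all lookups and inserts go through a single `GetOrAdd`, as with `ConcurrentDictionary`.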
@@ -84,9 +86,10 @@ using (var lifetime = urlLocks.Acquire(url))
 MemoryCache is perfectly serviceable. But in some situations, it can be a bottleneck.
 
 - Makes heap allocations when the native object key is not type string.
+- Is not 'scan' resistant: fetching all keys will load everything into memory.
 - Does not scale well with concurrent writes.
 - Executes code for perf counters that can't be disabled.
-- Uses a heuristic to estimate memory used, and the 'trim' process may remove useful items. If many items are added quickly, runaway is a problem.
+- Uses a heuristic to estimate memory used, and the 'trim' process may remove useful items.
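To make the first bullet concrete: `MemoryCache` keys must be strings, so any other key type pays a string conversion allocation on every lookup, while a generic cache keys on the native type directly. A minimal sketch, assuming the `ConcurrentLru` API from the usage snippet above:

```csharp
using System;
using System.Runtime.Caching; // System.Runtime.Caching.MemoryCache
using BitFaster.Caching.Lru;  // assumed namespace, as above

public static class KeyAllocationExample
{
    public static void Run()
    {
        Guid id = Guid.NewGuid();

        // MemoryCache keys are strings: a Guid key must be converted,
        // allocating a new string on every single Add/Get call.
        var memoryCache = MemoryCache.Default;
        memoryCache.Add(id.ToString(), "payload", DateTimeOffset.Now.AddMinutes(5));
        object hit = memoryCache.Get(id.ToString()); // allocates again

        // A generic cache keys on Guid directly: no conversion, no allocation.
        var lru = new ConcurrentLru<Guid, string>(128);
        string value = lru.GetOrAdd(id, k => "payload");
    }
}
```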
 
 # Performance
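The second hunk's header shows `urlLocks.Acquire(url)`, the SingletonCache pattern from the table above. A hedged sketch of how that is typically used, assuming `SingletonCache<TKey, TValue>` exposes an `Acquire` method returning a disposable lifetime whose `Value` is the per-key singleton:

```csharp
using System;
using BitFaster.Caching; // assumed namespace

public static class UrlLockExample
{
    // Assumed shape: one lock object per URL, shared by all callers
    // holding a live lifetime, and discarded when the last one disposes.
    private static readonly SingletonCache<Uri, object> urlLocks = new();

    public static void DoWork(Uri url)
    {
        using (var lifetime = urlLocks.Acquire(url))
        {
            lock (lifetime.Value)
            {
                // All concurrent callers for this URL contend on the
                // same object; different URLs proceed independently.
            }
        }
    }
}
```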

@@ -150,7 +153,7 @@ Cache size = *N* / 10 (so we can cache 10% of the total set). ConcurrentLru has
 In this test the same items are fetched repeatedly, and no items are evicted. This is representative of a high hit rate scenario with a small number of hot items.
 
 - The ConcurrentLru family does not move items in the queues; it just marks items as accessed for pure cache hits.
-- ClassicLru must maintain item order, and is internally splicing the fetched item to the head of the linked list.
+- Classic Lru must maintain item order, and is internally splicing the fetched item to the head of the linked list.
 - MemoryCache and ConcurrentDictionary represent a pure lookup. This is the best case scenario for MemoryCache, since the lookup key is a string (if the key were a Guid, using MemoryCache adds string conversion overhead).
 
 FastConcurrentLru does not allocate and is approximately 10x faster than MemoryCache.
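The splicing point above is the crux of the benchmark: a classic LRU does list surgery on every hit, which requires a lock under concurrency, whereas merely marking an item as accessed does not. A rough sketch of that cost (illustrative only, not this library's implementation):

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: a classic LRU guarded by a single lock.
// Every hit splices the node to the head of the list, so even
// pure cache hits serialize on the lock, unlike a pseudo LRU
// that only flags the entry as accessed.
public class NaiveClassicLru<K, V> where K : notnull
{
    private readonly int capacity;
    private readonly Dictionary<K, LinkedListNode<KeyValuePair<K, V>>> map = new();
    private readonly LinkedList<KeyValuePair<K, V>> order = new();
    private readonly object sync = new();

    public NaiveClassicLru(int capacity) => this.capacity = capacity;

    public V GetOrAdd(K key, Func<K, V> valueFactory)
    {
        lock (sync)
        {
            if (map.TryGetValue(key, out var node))
            {
                order.Remove(node);   // splice out of current position
                order.AddFirst(node); // move to head: most recently used
                return node.Value.Value;
            }

            var value = valueFactory(key);
            map[key] = order.AddFirst(new KeyValuePair<K, V>(key, value));

            if (map.Count > capacity)
            {
                var last = order.Last!; // least recently used
                order.RemoveLast();
                map.Remove(last.Value.Key);
            }

            return value;
        }
    }
}
```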
