# ⚡ BitFaster.Caching

High performance, thread-safe in-memory caching primitives for .NET.

LRU implementations are intended as an alternative to the System.Runtime.Caching.MemoryCache family of classes (e.g. HttpRuntime.Cache, System.Web.Caching et al.). MemoryCache makes heap allocations when the native object key is not type string, and does not offer the fastest possible performance.

[![NuGet version](https://badge.fury.io/nu/BitFaster.Caching.svg)](https://badge.fury.io/nu/BitFaster.Caching)

# Installing via NuGet
`Install-Package BitFaster.Caching`

# Caching primitives

| Class | Description |
|:-------|:---------|
| ClassicLru | Bounded size LRU with strict ordering.<br><br>Use if ordering is important, but data structures are synchronized with a lock, which limits scalability. |
| ConcurrentLru | Bounded size pseudo LRU.<br><br>For when you want a ConcurrentDictionary, but with bounded size. Maintains pseudo order, but is faster than ClassicLru and not prone to lock contention. |
| ConcurrentTlru | Bounded size pseudo LRU where items have a TTL.<br><br>Same as ConcurrentLru, but with a [time aware least recently used (TLRU)](https://en.wikipedia.org/wiki/Cache_replacement_policies#Time_aware_least_recently_used_(TLRU)) eviction policy. |
| FastConcurrentLru/FastConcurrentTLru | Same as ConcurrentLru/ConcurrentTLru, but with hit counting logic eliminated, making them 10-30% faster. |
| SingletonCache | Cache singletons by key. Discard when no longer in use.<br><br>For example, cache a SemaphoreSlim per user, where the user population is large, but the active user count is low. |
| Scoped<IDisposable> | A thread-safe wrapper for storing IDisposable objects in a cache that may dispose and invalidate them. The scope keeps the object alive until all callers have finished. |

# Usage
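For a simple bounded cache of plain values, a `ConcurrentLru` can be used directly. A minimal sketch, where the constructor arguments are assumed (consistent with the Scoped example below) to be concurrency level, capacity, and key comparer:

```csharp
// A bounded pseudo LRU cache of string values keyed by int.
// Assumed constructor arguments: concurrency level, capacity, key comparer.
var lru = new ConcurrentLru<int, string>(2, 9, EqualityComparer<int>.Default);

// The value factory runs only on a cache miss; hits return the cached value.
string value = lru.GetOrAdd(1, k => k.ToString());
```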
## Caching IDisposable objects

All cache classes in BitFaster.Caching own the lifetime of cached values, and will automatically dispose values when they are evicted.

To avoid races using objects after they have been disposed by the cache, wrap them with `Scoped`. The call to `CreateLifetime` creates a `Lifetime` that guarantees the scoped object will not be disposed until the lifetime is disposed. `Scoped` is thread-safe, and lifetimes are valid for concurrent callers.

```csharp
var lru = new ConcurrentLru<int, Scoped<SomeDisposable>>(2, 9, EqualityComparer<int>.Default);
var valueFactory = new SomeDisposableValueFactory();

using (var lifetime = lru.GetOrAdd(1, valueFactory.Create).CreateLifetime())
{
    // lifetime.Value is guaranteed to be alive until the lifetime is disposed
}
```
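Here `SomeDisposable` and `SomeDisposableValueFactory` are stand-ins for your own types. A minimal sketch of what the example assumes, where the `Scoped<T>` constructor wrapping a value follows from the wrapper's description above:

```csharp
public class SomeDisposable : IDisposable
{
    public void Dispose()
    {
        // release any resources held by the cached value
    }
}

public class SomeDisposableValueFactory
{
    // Invoked by GetOrAdd on a cache miss. The cache stores the Scoped
    // wrapper and disposes it (and the inner value) on eviction.
    public Scoped<SomeDisposable> Create(int key)
    {
        return new Scoped<SomeDisposable>(new SomeDisposable());
    }
}
```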
## Caching Singletons by key

`SingletonCache` enables mapping every key to a single instance of a value, and keeping the value alive only while it is in use. This is useful when the total number of keys is large, but few will be in use at any moment.

The example below shows how to implement exclusive Url access using a lock object per Url.

```csharp
var urlLocks = new SingletonCache<Url, object>();

Url url = new Url("https://foo.com");

using (var handle = urlLocks.Acquire(url))
{
    // the handle keeps the lock object alive; all concurrent callers
    // acquiring the same url receive the same instance
    lock (handle.Value)
    {
        // exclusive url access
    }
}
```
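The SemaphoreSlim-per-user scenario from the primitives table works the same way. The sketch below assumes an `Acquire` overload that accepts a value factory, since `SemaphoreSlim` has no parameterless constructor; the key and semaphore count are illustrative:

```csharp
var semaphores = new SingletonCache<string, SemaphoreSlim>();

// Assumption: Acquire accepts a factory used to create the value on first use.
using (var handle = semaphores.Acquire("alice", k => new SemaphoreSlim(1)))
{
    // every caller acquiring "alice" shares one SemaphoreSlim
    // while at least one handle is held
    handle.Value.Wait();
    try
    {
        // throttled work for this user
    }
    finally
    {
        handle.Value.Release();
    }
}
```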
# Performance

## Lru Benchmarks