MemoryCache is perfectly serviceable. But in some situations, it can be a bottleneck.
## Lru Hit rate
The charts below show the relative hit rate of classic LRU vs Concurrent LRU on a [Zipfian distribution](https://en.wikipedia.org/wiki/Zipf%27s_law) of input keys, with parameter *s* = 0.5 and *s* = 0.86 respectively. If there are *N* items, the probability of accessing an item numbered *i* or less is (*i* / *N*)^*s*.
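As a rough illustration, that CDF can be inverted to draw sample keys directly. The sketch below is illustrative only; `ZipfianSampler` is a hypothetical name and is not part of the library.

```csharp
using System;

// Minimal sketch: inverse-CDF sampling of the Zipfian distribution
// described above, where P(key <= i) = (i / N)^s.
// Illustrative only; this class is not part of BitFaster.Caching.
public class ZipfianSampler
{
    private readonly int n;       // total number of keys, N
    private readonly double s;    // skew parameter, s
    private readonly Random random = new Random();

    public ZipfianSampler(int n, double s)
    {
        this.n = n;
        this.s = s;
    }

    public int Next()
    {
        // Invert u = (i / N)^s  =>  i = N * u^(1/s), rounded up to a whole key.
        double u = random.NextDouble();
        int i = (int)Math.Ceiling(n * Math.Pow(u, 1.0 / s));
        return Math.Min(Math.Max(i, 1), n);
    }
}
```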
Here *N* = 50000, and we take 1 million sample keys. The hit rate is the number of times we get a cache hit divided by 1 million.
This test was repeated with the cache configured to different sizes, expressed as a percentage of *N* (e.g. 10% would be a cache with a capacity of 5000).
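A minimal sketch of this measurement is below. It assumes the hypothetical `ZipfianSampler` sketched above, and uses the library's `ConcurrentLru` with `TryGet`/`GetOrAdd`; the loop itself is illustrative, not the project's actual test harness.

```csharp
using System;
using BitFaster.Caching.Lru;

// Sketch of the hit rate measurement described above: 1 million Zipfian
// samples over N = 50000 keys, cache capacity set to a percentage of N.
const int n = 50000;
const int sampleCount = 1_000_000;
const double cachePercent = 0.10;              // e.g. 10% => capacity 5000

var sampler = new ZipfianSampler(n, 0.86);     // sampler sketched above
var lru = new ConcurrentLru<int, int>((int)(n * cachePercent));

int hits = 0;
for (int i = 0; i < sampleCount; i++)
{
    int key = sampler.Next();
    if (lru.TryGet(key, out _))
    {
        hits++;                                // cache hit
    }
    else
    {
        lru.GetOrAdd(key, k => k);             // miss: populate the cache
    }
}

double hitRate = (double)hits / sampleCount;
Console.WriteLine($"Hit rate: {hitRate:P2}");
```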
When the cache is small, below 15% of the total key space, ConcurrentLru outperforms ClassicLru.
<table>
<tr>
## Lru Benchmarks
In the benchmarks, a cache miss is essentially free. These tests exist purely to compare the raw execution speed of the cache code. In a real setting, where a cache miss is presumably quite expensive, the relative overhead of the cache will be very small.
Benchmarks are based on BenchmarkDotNet, so they are single-threaded. The ConcurrentLru family of classes can outperform ClassicLru in multithreaded workloads.
Take 1000 samples of a [Zipfian distribution](https://en.wikipedia.org/wiki/Zipf%27s_law) over a set of keys of size *N* and use the keys to look up values in the cache. If there are *N* items, the probability of accessing an item numbered *i* or less is (*i* / *N*)^*s*.
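A stripped-down sketch of how such a lookup benchmark could be structured with BenchmarkDotNet is shown below. The class name, key count, and capacity are assumptions for illustration, not the repository's real benchmark code, and it again relies on the hypothetical `ZipfianSampler` sketched earlier.

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using BitFaster.Caching.Lru;

// Sketch of a BenchmarkDotNet lookup benchmark in the spirit described
// above; the project's real benchmarks are more elaborate.
public class LruLookupBenchmark
{
    private const int N = 500;
    private readonly int[] samples = new int[1000];   // 1000 Zipfian sample keys
    private readonly ConcurrentLru<int, int> lru = new ConcurrentLru<int, int>(N / 10);

    [GlobalSetup]
    public void Setup()
    {
        var sampler = new ZipfianSampler(N, 0.86);    // sampler sketched earlier
        for (int i = 0; i < samples.Length; i++)
        {
            samples[i] = sampler.Next();
        }
    }

    [Benchmark]
    public void Lookup()
    {
        foreach (int key in samples)
        {
            lru.GetOrAdd(key, k => k);                // miss cost is near zero here
        }
    }
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<LruLookupBenchmark>();
}
```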