Commit 2c591d6

Update README.md
1 parent 7fda9ec commit 2c591d6

File tree

1 file changed: +8 -3 lines changed


README.md

Lines changed: 8 additions & 3 deletions
@@ -77,9 +77,12 @@ MemoryCache is perfectly servicable. But in some situations, it can be a bottlen
 
 ## Lru Hit rate
 
-Analysis of 1 million samples of a Zipfan distribution with different *s* values. There are 50,000 total keys, and the test was run with the cache configured to different sizes expressed as a percentage of the total key space.
+The charts below show the relative hit rate of classic LRU vs Concurrent LRU on a [Zipfian distribution](https://en.wikipedia.org/wiki/Zipf%27s_law) of input keys, with parameter *s* = 0.5 and *s* = 0.86 respectively. If there are *N* items, the probability of accessing an item numbered *i* or less is (*i* / *N*)^*s*.
 
-When the cache is small, below 15% of the total key space, ConcurrentLru significantly outperforms ClassicLru.
+Here *N* = 50000, and we take 1 million sample keys. The hit rate is the number of times we get a cache hit divided by 1 million.
+This test was repeated with the cache configured to different sizes expressed as a percentage of *N* (e.g. 10% would be a cache with a capacity of 5000).
+
+When the cache is small, below 15% of the total key space, ConcurrentLru outperforms ClassicLru.
 
 <table>
 <tr>
@@ -94,6 +97,8 @@ When the cache is small, below 15% of the total key space, ConcurrentLru signifi
 
 ## Lru Benchmarks
 
+In the benchmarks, a cache miss is essentially free. These tests exist purely to compare the raw execution speed of the cache code. In a real setting, where a cache miss is presumably quite expensive, the relative overhead of the cache will be very small.
+
 Benchmarks are based on BenchmarkDotNet, so they are single threaded. The ConcurrentLru family of classes can outperform ClassicLru in multithreaded workloads.
 
 ~~~
@@ -108,7 +113,7 @@ Job=RyuJitX64 Jit=RyuJit Platform=X64
 
 ### Lookup keys with a Zipf distribution
 
-Take 1000 samples of a [Zipfan distribution](https://en.wikipedia.org/wiki/Zipf%27s_law) over a set of keys of size *N* and use the keys to lookup values in the cache. If there are *N* items, the probability of accessing an item numbered *i* or less is (*i* / *N*)^*s*.
+Take 1000 samples of a [Zipfian distribution](https://en.wikipedia.org/wiki/Zipf%27s_law) over a set of keys of size *N* and use the keys to look up values in the cache. If there are *N* items, the probability of accessing an item numbered *i* or less is (*i* / *N*)^*s*.
 
 *s* = 0.86 (yields approx 80/20 distribution)<br>
 *N* = 500
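The Zipfian hit-rate test described in the changed text can be reproduced by inverse-transform sampling of the stated CDF, P(rank ≤ *i*) = (*i* / *N*)^*s*. Below is a minimal Python sketch of that procedure — an illustration only, not the library's actual C# benchmark code; the `LruCache` helper, the RNG seed, and the reduced sample count are assumptions made here:

```python
import math
import random
from collections import OrderedDict

def zipf_sample(n, s, rng):
    # Inverse-transform sampling of the CDF P(rank <= i) = (i / n)^s:
    # draw u uniform on (0, 1], then solve (i / n)^s = u for i.
    u = rng.random() or 1e-12
    return math.ceil(n * u ** (1.0 / s))

class LruCache:
    # Minimal classic LRU built on OrderedDict, used only to count hits.
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get_or_add(self, key):
        if key in self.items:
            self.items.move_to_end(key)     # key becomes most recently used
            return True                     # hit
        self.items[key] = None
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used
        return False                        # miss

rng = random.Random(42)
# Fewer samples than the README's 1 million, purely to keep this sketch fast.
n, s, samples = 50_000, 0.86, 200_000
cache = LruCache(capacity=n // 10)          # cache sized at 10% of the key space
hits = sum(cache.get_or_add(zipf_sample(n, s, rng)) for _ in range(samples))
print(f"hit rate: {hits / samples:.3f}")
```

With *s* = 0.86 the distribution is heavily skewed toward low-numbered keys, which is why even a cache holding a small fraction of the key space achieves a useful hit rate.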
