
Commit 7fda9ec

Update README.md
1 parent 242f526 commit 7fda9ec

1 file changed: +17 −0 lines changed

README.md

Lines changed: 17 additions & 0 deletions
@@ -75,6 +75,23 @@ MemoryCache is perfectly servicable. But in some situations, it can be a bottlen

# Performance

## Lru Hit rate

Analysis of 1 million samples of a Zipfian distribution with different *s* values. There are 50,000 total keys, and the test was run with the cache configured to different sizes, each expressed as a percentage of the total key space.

When the cache is small (below 15% of the total key space), ConcurrentLru significantly outperforms ClassicLru.

<table>
<tr>
<td>
<img src="https://user-images.githubusercontent.com/12851828/84707621-e2a62480-af13-11ea-91e7-726911bce162.png" width="400"/>
</td>
<td>
<img src="https://user-images.githubusercontent.com/12851828/84707663-f81b4e80-af13-11ea-96d4-1ba71444d333.png" width="400"/>
</td>
</tr>
</table>
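
To make the methodology concrete, here is a minimal sketch of how such a hit rate measurement could be written: keys are drawn from a Zipf(*s*) distribution over 50,000 ranks by inverse transform sampling, and misses are counted via the `GetOrAdd` value factory. The `ConcurrentLru` constructor and `GetOrAdd` signature used below are assumptions about the library's API rather than code taken from this commit.

```csharp
// Sketch only: estimate LRU hit rate under a Zipf(s) key distribution.
// The ConcurrentLru constructor and GetOrAdd signature are assumed here;
// check the library source for the exact API.
using System;
using BitFaster.Caching.Lru;

public static class HitRateTest
{
    // cacheSizeFraction is the cache capacity as a fraction of the key space,
    // e.g. 0.15 for a cache sized at 15% of the 50,000 keys.
    public static double Measure(double s, double cacheSizeFraction, int keyCount = 50_000, int samples = 1_000_000)
    {
        // Zipf(s) over ranks 1..keyCount: P(rank) is proportional to 1 / rank^s.
        // Build the cumulative distribution once, then sample by binary search.
        var cdf = new double[keyCount];
        double sum = 0;
        for (int i = 0; i < keyCount; i++)
        {
            sum += 1.0 / Math.Pow(i + 1, s);
            cdf[i] = sum;
        }

        int capacity = (int)(keyCount * cacheSizeFraction);
        var lru = new ConcurrentLru<int, int>(capacity); // assumed capacity-only constructor

        var rng = new Random(42);
        int hits = 0;

        for (int n = 0; n < samples; n++)
        {
            // Inverse transform sampling: first rank whose cumulative weight exceeds u.
            double u = rng.NextDouble() * sum;
            int key = Array.BinarySearch(cdf, u);
            if (key < 0) key = ~key;

            bool hit = true;
            lru.GetOrAdd(key, k => { hit = false; return k; }); // factory only runs on a miss
            if (hit) hits++;
        }

        return (double)hits / samples;
    }
}
```

Calling `Measure(1.0, 0.15)`, for example, would estimate the hit ratio for *s* = 1 with the cache sized at 15% of the key space (parameter values chosen purely for illustration).
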
## Lru Benchmarks

Benchmarks are based on BenchmarkDotNet, so they are single threaded. The ConcurrentLru family of classes can outperform ClassicLru in multithreaded workloads.
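
As an illustration of how such a benchmark might be structured, here is a minimal BenchmarkDotNet sketch comparing single-threaded lookup cost of the two caches. The `ClassicLru`/`ConcurrentLru` constructors and the `TryGet` signature are assumptions based on typical usage and may not match the library's exact API.

```csharp
// Minimal BenchmarkDotNet sketch: single-threaded lookup comparison.
// Cache constructors and TryGet signature are assumed, not taken from this commit.
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using BitFaster.Caching.Lru;

public class LruLookupBenchmarks
{
    private ClassicLru<int, int> classic;
    private ConcurrentLru<int, int> concurrent;

    [GlobalSetup]
    public void Setup()
    {
        classic = new ClassicLru<int, int>(9);       // assumed capacity-only constructor
        concurrent = new ConcurrentLru<int, int>(9); // assumed capacity-only constructor

        // Pre-populate so the benchmarked lookups are cache hits.
        classic.GetOrAdd(1, k => k);
        concurrent.GetOrAdd(1, k => k);
    }

    [Benchmark(Baseline = true)]
    public bool ClassicLruTryGet() => classic.TryGet(1, out _);

    [Benchmark]
    public bool ConcurrentLruTryGet() => concurrent.TryGet(1, out _);
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<LruLookupBenchmarks>();
}
```

Because BenchmarkDotNet drives each `[Benchmark]` method on a single thread, this measures raw lookup overhead; the concurrency advantage of ConcurrentLru only shows up in separately constructed multithreaded workloads.
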
