
Throughput


We measure throughput for four scenarios (a sketch of the cache call each scenario exercises follows the list):

  1. Read (100% cache hits)
  2. Read + Write
  3. Update
  4. Evict (100% cache miss)
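
The sketch below illustrates the kind of cache call each scenario exercises, assuming BitFaster.Caching's ConcurrentLru. GetOrAdd and TryUpdate are from the library's cache interface; the key ranges and capacities here are illustrative, not the exact benchmark parameters.

```csharp
using BitFaster.Caching.Lru;

var cache = new ConcurrentLru<int, int>(capacity: 500);

// 1. Read: pre-populate so every lookup is a hit.
for (int i = 0; i < 500; i++) cache.GetOrAdd(i, k => k);
int hit = cache.GetOrAdd(42, k => k);

// 2. Read + Write: a smaller cache forces some misses, so the
// value factory runs and new items are inserted.
var small = new ConcurrentLru<int, int>(capacity: 50);
int maybeMiss = small.GetOrAdd(42, k => k);

// 3. Update: overwrite the value for a key that is already resident.
cache.TryUpdate(42, 43);

// 4. Evict: request only keys that are never resident, so every
// call misses and each insert pushes out an existing item.
for (int i = 1000; i < 2000; i++) small.GetOrAdd(i, k => k);
```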

The data below was collected on an Intel Xeon W-2133 CPU (3.60GHz, 1 CPU, 12 logical and 6 physical cores) running .NET 6.0. The MemoryCache used here is from Microsoft.Extensions.Caching.Memory version 6.0.1.

Read throughput

In this test, we generate 2000 samples of 500 keys with a Zipfian distribution (s = 0.86). Caches have size 500, so every fetch is a hit. From N concurrent threads, we fetch the sample keys in sequence (each thread uses the same input keys). The principal scalability limit in concurrent applications is the exclusive resource lock. As the number of threads increases, ConcurrentLru and ConcurrentLfu significantly outperform both MemoryCache and an LRU implemented with a short-lived exclusive lock that synchronizes the linked list data structure.
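
A minimal sketch of how such a read benchmark can be structured, assuming ConcurrentLru from BitFaster.Caching; the Zipf sampler is a straightforward inverse-CDF implementation written here for illustration and is not the exact generator used for the measurements above.

```csharp
using System;
using System.Threading.Tasks;
using BitFaster.Caching.Lru;

// Generate 2000 Zipfian-distributed samples over 500 keys (s = 0.86)
// by inverting the cumulative distribution.
static int[] ZipfSamples(int sampleCount, int keyCount, double s, Random random)
{
    double[] cdf = new double[keyCount];
    double sum = 0;
    for (int k = 0; k < keyCount; k++)
    {
        sum += 1.0 / Math.Pow(k + 1, s);
        cdf[k] = sum;
    }

    var samples = new int[sampleCount];
    for (int i = 0; i < sampleCount; i++)
    {
        double u = random.NextDouble() * sum;
        int index = Array.BinarySearch(cdf, u);
        samples[i] = index >= 0 ? index : ~index;
    }
    return samples;
}

int[] keys = ZipfSamples(2000, 500, 0.86, new Random(42));
var cache = new ConcurrentLru<int, int>(capacity: 500);

// Warm the cache so every read is a hit, then fetch the same
// key sequence from N threads.
foreach (int k in keys) cache.GetOrAdd(k, x => x);

Parallel.For(0, Environment.ProcessorCount, _ =>
{
    foreach (int k in keys)
    {
        cache.GetOrAdd(k, x => x);
    }
});
```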

[Figure: read throughput vs. number of threads]

Read+Write throughput

As in the read test, we generate 2000 samples of 500 keys with a Zipfian distribution (s = 0.86), but caches now have size 50, so the cache can hold only 10% of the values. From N concurrent threads, we fetch the sample keys in sequence (each thread uses the same input keys). Some of the requests are cache misses, and the cache must evict items to preserve its bounded size; a sketch follows.
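
The same harness applies with a smaller cache. A sketch, reusing the illustrative ZipfSamples helper from the previous block:

```csharp
using System;
using System.Threading.Tasks;
using BitFaster.Caching.Lru;

// Same Zipfian key sequence as the read test, but the cache holds
// only 10% of the key space, so some fetches miss and trigger
// eviction of colder items.
int[] keys = ZipfSamples(2000, 500, 0.86, new Random(42));
var cache = new ConcurrentLru<int, int>(capacity: 50);

Parallel.For(0, Environment.ProcessorCount, _ =>
{
    foreach (int k in keys)
    {
        // Misses invoke the value factory and insert, evicting as needed.
        cache.GetOrAdd(k, x => x);
    }
});
```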

[Figure: read+write throughput vs. number of threads]

Update throughput

[Figure: update throughput vs. number of threads]

Eviction throughput

[Figure: eviction throughput vs. number of threads]
