cassandra-commits mailing list archives

From "Robert Stupp (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-10855) Use Caffeine (W-TinyLFU) for on-heap caches
Date Fri, 25 Dec 2015 22:36:49 GMT


Robert Stupp commented on CASSANDRA-10855:

[Ran CI|] against the (rebased) branch, and the results
look good to me (i.e. no regression).

Unfortunately the cstar perf runs ([trades-fwd-lcs-nolz4|]
and [cassci regression test r/w|])
show that using Caffeine for the key cache slightly _degrades_ performance in terms of throughput
and latencies. Some percentiles (mostly max latencies) are slightly better, but the overall
result is that performance degrades. The key-cache hit rate is slightly better with Caffeine
(trades-fwd-lcs-nolz4 showing slightly more than 10% hit rate w/ Caffeine vs. slightly less
than 10% w/o Caffeine).

_trades-fwd-lcs-nolz4_ uses somewhat bigger partitions and fills the key cache completely.
_regression r/w_ uses small partitions and only fills roughly 10% of the key cache.
The perf runs used a 3-node C* cluster (“blade_11_b”) - each node has two 6-core Xeon CPUs
and a total of 64GB RAM.

From a _really quick & brief_ look at the Caffeine source, I *suspect* that the worse
numbers are caused by the spinning loops. Also, the padded fields, which can behave completely
differently on NUMA than on single-CPU systems, may have some bad influence in this test.

> Use Caffeine (W-TinyLFU) for on-heap caches
> -------------------------------------------
>                 Key: CASSANDRA-10855
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Ben Manes
>              Labels: performance
> Cassandra currently uses [ConcurrentLinkedHashMap|]
for performance critical caches (key, counter) and Guava's cache for non-critical (auth, metrics,
security). All of these usages have been replaced by [Caffeine|],
written by the author of the previously mentioned libraries.
> The primary incentive is to switch from the LRU policy to W-TinyLFU, which provides [near
optimal|] hit rates. It performs particularly well on database and search traces, is
scan resistant, and adds only a very small time/space overhead over LRU.
> Secondarily, Guava's caches never obtained [performance|] comparable
to CLHM because some optimizations were not ported over. This change results in faster reads
and no garbage created as a side effect.
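For context, the core of W-TinyLFU's admission policy can be sketched in a few lines: a compact frequency sketch estimates how often each key has been accessed, and an evicted victim is only displaced when the incoming candidate is estimated to be more popular. The sketch below is illustrative only - Caffeine's real implementation uses a 4-bit CountMin sketch with multiple hash functions, a "doorkeeper" filter, and periodic aging, all omitted here:

```java
// Illustrative sketch of TinyLFU-style admission, NOT Caffeine's code.
// A single saturating counter array stands in for the real CountMin sketch.
class FrequencySketch {
    private final int[] counts;

    FrequencySketch(int size) {
        counts = new int[size];
    }

    // Record an access; counters saturate at 15 (4 bits, as in TinyLFU).
    void increment(Object key) {
        int i = Math.floorMod(key.hashCode(), counts.length);
        if (counts[i] < 15) {
            counts[i]++;
        }
    }

    int estimate(Object key) {
        return counts[Math.floorMod(key.hashCode(), counts.length)];
    }

    // Admit the candidate only if it looks more popular than the victim.
    boolean admit(Object candidate, Object victim) {
        return estimate(candidate) > estimate(victim);
    }
}
```

This admission check is what makes the policy scan resistant: a one-off key from a scan has a low frequency estimate, so it cannot push a proven-hot victim out of the cache.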

This message was sent by Atlassian JIRA
