cassandra-commits mailing list archives

From "Ben Manes (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-5661) Discard pooled readers for cold data
Date Sun, 07 Jul 2013 10:29:49 GMT


Ben Manes commented on CASSANDRA-5661:

I solved the LRU scalability problem years ago, which gave you CLHM and Guava's Cache. It
scales well at higher thread counts without degrading due to LRU management, being limited
primarily by the hash table itself. Previous approaches didn't scale beyond 4-8 threads;
at 32+ threads the limit is the chosen hash table design.

In neither approach will there be significant contention or overhead. The difference is
the granularity at which resources are bounded and how they are evicted.

You seem to be focusing on tuning parameters, minute details, etc. for a class written in
a few evenings as a favor, knowing that those things are trivial to change. There's not much
point in debating it with me, as I don't care and have no stake or interest in what is decided.
Especially when you're comparing it against a simplistic usage relying on another class I
wrote much of, Guava's. In the end, something I wrote will be used to solve this bug. ;)
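The bounding-and-eviction idea discussed above can be illustrated with a minimal, single-threaded LRU sketch; CLHM and Guava's Cache provide the concurrent, scalable versions of this same policy. The class below is illustrative only and is not taken from either library:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU bound: an access-ordered LinkedHashMap that evicts the
// eldest (least recently used) entry once capacity is exceeded. This is
// the policy that CLHM and Guava's Cache implement with amortized,
// concurrent bookkeeping instead of a single lock-protected map.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true -> LRU iteration order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict when over the bound
    }
}
```

Touching an entry with `get` moves it to the back of the eviction order, so inserting past capacity discards the coldest entry rather than the most recent one.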

> Discard pooled readers for cold data
> ------------------------------------
>                 Key: CASSANDRA-5661
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.2.1
>            Reporter: Jonathan Ellis
>            Assignee: Pavel Yaskevich
>             Fix For: 1.2.7
>         Attachments: CASSANDRA-5661.patch, DominatorTree.png, Histogram.png
> Reader pooling was introduced in CASSANDRA-4942 but pooled RandomAccessReaders are never
> cleaned up until the SSTableReader is closed.  So memory use is "the worst case simultaneous
> RAR we had open for this file, forever."
> We should introduce a global limit on how much memory to use for RAR, and evict old ones.
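The fix proposed in the quoted issue — a global byte limit on pooled readers with eviction of old ones — can be sketched as follows. All names here (`ReaderPool`, `PooledReader`, `bufferSize`) are hypothetical and not Cassandra's actual implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical stand-in for a pooled RandomAccessReader: only the buffer
// size and closed state matter for this sketch.
class PooledReader {
    final long bufferSize;
    boolean closed;
    PooledReader(long bufferSize) { this.bufferSize = bufferSize; }
    void close() { closed = true; }
}

// Sketch of a globally bounded reader pool: releasing a reader back into
// the pool may push total buffered bytes over the limit, in which case the
// least-recently-returned (coldest) readers are closed and discarded.
class ReaderPool {
    private final long maxBytes;     // global memory limit for pooled readers
    private long totalBytes;
    private final Deque<PooledReader> pool = new ArrayDeque<>();

    ReaderPool(long maxBytes) { this.maxBytes = maxBytes; }

    synchronized void release(PooledReader reader) {
        pool.addLast(reader);
        totalBytes += reader.bufferSize;
        while (totalBytes > maxBytes) {
            PooledReader evicted = pool.removeFirst(); // oldest first
            totalBytes -= evicted.bufferSize;
            evicted.close();                           // discard cold reader
        }
    }

    synchronized long pooledBytes() { return totalBytes; }
}
```

With such a bound, memory use tracks the current working set of readers rather than the worst case ever seen for a file.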

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see:
