cassandra-commits mailing list archives

From "Pavel Yaskevich (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-5661) Discard pooled readers for cold data
Date Wed, 26 Jun 2013 03:55:20 GMT


Pavel Yaskevich commented on CASSANDRA-5661:

It seems like we are trying to address two different problems: the one in the ticket description
and the one Jeremiah pointed out. Let me describe what I'm trying to solve: when we read from
multiple SSTables for a while and the access pattern then shifts to a different subset of SSTables,
the previous [C]RAR instances are returned to their per-SSTable queues and are stuck there until
each SSTable is deallocated (by compaction). That creates memory pressure under stale workloads,
or whenever compaction is running behind.
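To make the problem concrete, here is a minimal sketch of the pooling pattern being described: one queue of recycled readers per SSTable, freed only when the table is closed. All names here (ReaderPool, PooledReader) are illustrative, not Cassandra's actual classes.

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of the pooling pattern: one queue of recycled
// readers per SSTable file, drained only when the file is closed.
class ReaderPool {
    static class PooledReader {
        final byte[] buffer; // stands in for the RAR's read buffer
        PooledReader(int bufferSize) { this.buffer = new byte[bufferSize]; }
    }

    private final Map<String, Queue<PooledReader>> queues = new ConcurrentHashMap<>();

    // Check a reader out of the per-file queue, allocating if empty.
    PooledReader acquire(String file, int bufferSize) {
        PooledReader r = queues
                .computeIfAbsent(file, f -> new ConcurrentLinkedQueue<>())
                .poll();
        return r != null ? r : new PooledReader(bufferSize);
    }

    // Return a reader to its queue; it sits there indefinitely,
    // holding its buffer, until close() is called for the file.
    void release(String file, PooledReader reader) {
        queues.computeIfAbsent(file, f -> new ConcurrentLinkedQueue<>()).add(reader);
    }

    // Only compaction/close ever frees the pooled buffers.
    void close(String file) {
        queues.remove(file);
    }

    int pooledCount(String file) {
        Queue<PooledReader> q = queues.get(file);
        return q == null ? 0 : q.size();
    }
}
```

If the workload moves off a file, its queue keeps the worst-case number of buffers it ever held until compaction closes it, which is exactly the memory pressure described above.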

LRU could solve that problem if we set a limit on the total amount of memory we are willing to
use, but eviction would only start kicking in once we reach that limit, and at that point it would
create jitter in the queues and in processing latencies.
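For comparison, a minimal sketch of the LRU alternative being discussed: a single access-ordered map over all pooled readers, evicting the least recently used entry only once a global budget is exceeded (here simplified to a fixed entry count rather than real memory accounting).

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a global LRU pool: nothing is reclaimed
// while total usage stays under the cap, which is the behavior the
// comment above calls out as a source of jitter at the limit.
class GlobalLruPool<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    GlobalLruPool(int maxEntries) {
        super(16, 0.75f, true); // true = access order, i.e. LRU
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Eviction happens only after the global limit is reached.
        return size() > maxEntries;
    }
}
```

Note that a stale reader survives indefinitely as long as the pool stays under the cap, so cold entries are not reclaimed on their own.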

What I propose adds minimal bookkeeping overhead per queue and expires items sooner and more
precisely than LRU would. I'm also not really worried about the maximum number of items in each
per-SSTable queue, since it is organically bounded by the number of concurrent readers.
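The expiration approach described above could be sketched roughly as follows: each returned reader carries the time it was queued, and anything older than a fixed TTL is dropped the next time the queue is touched. All names are illustrative; this is an assumption about the shape of the proposal, not Cassandra's actual code.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical per-SSTable queue with timestamp-based expiration.
class ExpiringReaderQueue {
    static final class Entry {
        final Object reader;
        final long queuedAtMillis;
        Entry(Object reader, long queuedAtMillis) {
            this.reader = reader;
            this.queuedAtMillis = queuedAtMillis;
        }
    }

    private final Deque<Entry> queue = new ArrayDeque<>();
    private final long ttlMillis;

    ExpiringReaderQueue(long ttlMillis) { this.ttlMillis = ttlMillis; }

    synchronized void release(Object reader, long nowMillis) {
        expire(nowMillis);
        queue.addLast(new Entry(reader, nowMillis));
    }

    synchronized Object acquire(long nowMillis) {
        expire(nowMillis);
        Entry e = queue.pollLast(); // hand out the most recently used reader
        return e == null ? null : e.reader;
    }

    // Drop entries that have sat in the queue longer than the TTL;
    // the oldest entries sit at the head, so we stop at the first
    // entry still within the TTL.
    private void expire(long nowMillis) {
        while (!queue.isEmpty()
                && nowMillis - queue.peekFirst().queuedAtMillis > ttlMillis) {
            queue.pollFirst();
        }
    }

    synchronized int size() { return queue.size(); }
}
```

Because expiration runs on every release/acquire of that queue, cold readers are reclaimed shortly after the workload moves on, without waiting for a global memory limit to be hit.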
> Discard pooled readers for cold data
> ------------------------------------
>                 Key: CASSANDRA-5661
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.2.1
>            Reporter: Jonathan Ellis
>            Assignee: Pavel Yaskevich
>             Fix For: 1.2.7
> Reader pooling was introduced in CASSANDRA-4942 but pooled RandomAccessReaders are never
> cleaned up until the SSTableReader is closed.  So memory use is "the worst case simultaneous
> RAR we had open for this file, forever."
> We should introduce a global limit on how much memory to use for RAR, and evict old ones.
