cassandra-commits mailing list archives

From "Branimir Lambov (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-5863) In process (uncompressed) page cache
Date Tue, 26 Apr 2016 08:16:13 GMT


Branimir Lambov commented on CASSANDRA-5863:

bq. Let me address the things you mentioned - ...

You seem to be describing exactly what is currently implemented. The sstable metadata is part
of the data the rebufferers work with, so in that sense they do work at the sstable level. {{BufferlessRebufferer}}
is the back end, {{Rebufferer}} is the front. What fills the cache and what uses data from it
need different interfaces, and the primary difference is buffer management. In the compressed
case the cache provides shared decompressed buffers and does not give anyone access to the
underlying (mmapped or not) file or buffers. RAR neither knows nor cares about the
underlying sstable format, and apart from the chunk size neither does {{ReaderCache}}.
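
For illustration, the split looks roughly like this; the interface shapes below are my shorthand for the description above, not the exact signatures in the patch:

{code:java}
// Illustrative only: approximates the front/back split described above.
import java.nio.ByteBuffer;

// Front end: what readers (e.g. RAR) see. They get a buffer covering a position and
// never touch the underlying file, mmapped region or compressed data directly.
interface Rebufferer
{
    ByteBuffer rebuffer(long position); // may hand out a shared, cached buffer
    void release(ByteBuffer buffer);    // reader signals it is done with the buffer
    int chunkSize();
}

// Back end: what fills the cache. It writes one chunk into a buffer the cache owns.
interface BufferlessRebufferer
{
    void readChunk(long position, ByteBuffer into);
    int chunkSize();
}
{code}

The cache sits between the two: it owns the buffers, fills them through the back end on a miss, and hands out shared views through the front end.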

Perhaps the only point not yet addressed is the granularity of the cache. If I understand
you correctly, you are describing per-file/sstable caches: do you mean a specific space allocation
for each file? If so, how do you propose to manage splitting the space among the individual
caches? If not (i.e. per-file maps with a shared eviction strategy), that is a sensible option
that I started pursuing as part of CASSANDRA-11452 within the context of this infrastructure
and decided to forego at this point, because the benefit it would provide over just using Caffeine
would not be substantial enough for the amount of new code, complexity, testing and risk it would entail.

The latter is a decision that can be very easily changed in the future.
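
To make the comparison concrete, "just using Caffeine" means roughly the following: a single cache keyed by (file, chunk-aligned position) with one shared eviction policy, so no per-file space budgets need to be managed. The {{SharedChunkCache}} and {{Key}} names are purely illustrative, not the patch's code:

{code:java}
import java.nio.ByteBuffer;
import java.util.Objects;
import java.util.function.Function;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class SharedChunkCache
{
    // One global cache keyed by (file, chunk-aligned position); eviction is shared
    // across all files, so there is no per-file space allocation to balance.
    static final class Key
    {
        final String file;
        final long position;

        Key(String file, long position) { this.file = file; this.position = position; }

        @Override
        public boolean equals(Object o)
        {
            if (!(o instanceof Key)) return false;
            Key k = (Key) o;
            return position == k.position && file.equals(k.file);
        }

        @Override
        public int hashCode() { return Objects.hash(file, position); }
    }

    private final Cache<Key, ByteBuffer> cache;

    public SharedChunkCache(long capacityBytes)
    {
        cache = Caffeine.newBuilder()
                        .maximumWeight(capacityBytes)
                        .weigher((Key k, ByteBuffer v) -> v.capacity())
                        .build();
    }

    // On a miss the loader (the "bufferless" back end) fills and returns a fresh buffer.
    public ByteBuffer get(String file, long alignedPosition, Function<Key, ByteBuffer> loader)
    {
        return cache.get(new Key(file, alignedPosition), loader);
    }
}
{code}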

> In process (uncompressed) page cache
> ------------------------------------
>                 Key: CASSANDRA-5863
>                 URL:
>             Project: Cassandra
>          Issue Type: Sub-task
>            Reporter: T Jake Luciani
>            Assignee: Branimir Lambov
>              Labels: performance
>             Fix For: 3.x
> Currently, for every read, the CRAR reads each compressed chunk into a byte[], sends it to ICompressor, gets back another byte[] and verifies a checksum.
> This process is where the majority of time is spent in a read request.
> Before compression, we would have zero-copy of data and could respond directly from the page cache.
> It would be useful to have some kind of Chunk cache that could speed up this process for hot data, possibly off heap.
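
For context, the per-read work described in the quoted text is roughly the following; the {{Decompressor}} stand-in and the CRC32 checksum are simplified placeholders for ICompressor and the actual chunk checksum, not Cassandra's code:

{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.zip.CRC32;

public class ChunkReadSketch
{
    interface Decompressor // stand-in for ICompressor
    {
        int uncompress(byte[] in, int inOff, int inLen, byte[] out, int outOff) throws IOException;
    }

    // Read one compressed chunk, verify its checksum and decompress it: two byte[]
    // allocations plus CPU work on every read, which is the cost a chunk cache
    // avoids for hot data.
    static byte[] readChunk(RandomAccessFile file, long offset, int compressedLength,
                            long expectedChecksum, int uncompressedSize,
                            Decompressor decompressor) throws IOException
    {
        byte[] compressed = new byte[compressedLength];
        file.seek(offset);
        file.readFully(compressed);

        CRC32 checksum = new CRC32();
        checksum.update(compressed, 0, compressedLength);
        if (checksum.getValue() != expectedChecksum)
            throw new IOException("Corrupted chunk at offset " + offset);

        byte[] uncompressed = new byte[uncompressedSize];
        decompressor.uncompress(compressed, 0, compressedLength, uncompressed, 0);
        return uncompressed;
    }
}
{code}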
