cassandra-commits mailing list archives

From "Chris Burroughs (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-5863) Create a Decompressed Chunk [block] Cache
Date Thu, 06 Mar 2014 16:38:43 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13922720#comment-13922720 ]

Chris Burroughs commented on CASSANDRA-5863:
--------------------------------------------

FWIW, for design comparison: ZFS L2ARC compression is enabled whenever compression is enabled
for the dataset on disk, the rationale being along the lines of "LZ4 is wicked fast, so why
not?".  http://wiki.illumos.org/display/illumos/L2ARC+Compression

> Create a Decompressed Chunk [block] Cache
> -----------------------------------------
>
>                 Key: CASSANDRA-5863
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5863
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Core
>            Reporter: T Jake Luciani
>            Assignee: Pavel Yaskevich
>              Labels: performance
>             Fix For: 2.1 beta2
>
>
> Currently, for every read, the CRAR reads each compressed chunk into a byte[], sends
> it to ICompressor, gets back another byte[], and verifies a checksum.
> This process is where the majority of time is spent in a read request.
> Before compression, reads were zero-copy and could be served directly from the
> page cache.
> It would be useful to have some kind of chunk cache that could speed up this process
> for hot data. Initially this could be an off-heap cache, but it would be great to put the
> decompressed chunks onto an SSD so the hot data lives on a fast disk, similar to
> https://github.com/facebook/flashcache.
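
To make the read path above concrete, here is a minimal Java sketch of the caching idea the
ticket describes: decompressed chunks keyed by (file, chunk offset), so hot chunks skip the
read + decompress + checksum step. The ChunkKey and Decompressor names are illustrative
stand-ins, not Cassandra's actual CRAR/ICompressor API, and a production cache would be
off-heap and concurrent as the ticket suggests.

import java.nio.ByteBuffer;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical sketch only; names do not correspond to Cassandra internals.
public final class DecompressedChunkCache {

    // Cache key: one compressed chunk within one sstable file.
    static final class ChunkKey {
        final String path;
        final long offset;
        ChunkKey(String path, long offset) { this.path = path; this.offset = offset; }
        @Override public boolean equals(Object o) {
            if (!(o instanceof ChunkKey)) return false;
            ChunkKey k = (ChunkKey) o;
            return offset == k.offset && path.equals(k.path);
        }
        @Override public int hashCode() { return Objects.hash(path, offset); }
    }

    // Stand-in for the ICompressor step: reads one chunk, decompresses it,
    // and verifies its checksum.
    interface Decompressor {
        ByteBuffer decompressAndVerify(String path, long offset);
    }

    private final Decompressor decompressor;
    private final Map<ChunkKey, ByteBuffer> lru;

    DecompressedChunkCache(Decompressor decompressor, int maxChunks) {
        this.decompressor = decompressor;
        // Access-ordered LinkedHashMap gives simple LRU eviction.
        this.lru = new LinkedHashMap<ChunkKey, ByteBuffer>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<ChunkKey, ByteBuffer> e) {
                return size() > maxChunks;
            }
        };
    }

    // Hot chunks are served from the cache; cold chunks pay the
    // decompress-and-verify cost once and are then cached.
    synchronized ByteBuffer getChunk(String path, long offset) {
        ChunkKey key = new ChunkKey(path, offset);
        ByteBuffer cached = lru.get(key);
        if (cached != null) return cached.duplicate(); // hit: no decompression
        ByteBuffer fresh = decompressor.decompressAndVerify(path, offset);
        lru.put(key, fresh);
        return fresh.duplicate();
    }
}

An SSD-backed tier, as in the flashcache comparison, would sit below this in-memory map as a
second lookup level before falling back to decompression.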



--
This message was sent by Atlassian JIRA
(v6.2#6252)
