cassandra-commits mailing list archives

From "Pavel Yaskevich (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-5863) In process (uncompressed) page cache
Date Tue, 22 Apr 2014 04:48:17 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13976380#comment-13976380 ]

Pavel Yaskevich commented on CASSANDRA-5863:
--------------------------------------------

[~jbellis] I tried to directly replace blocks of the compressed file with uncompressed content
(aligning all of the blocks to a 64KB boundary, effectively creating file holes; mprotect-ing
some of the blocks to be writable; and writing the uncompressed contents), while keeping a
global per-file block heat map based on the key cache, but that didn't work out.

> In process (uncompressed) page cache
> ------------------------------------
>
>                 Key: CASSANDRA-5863
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5863
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Core
>            Reporter: T Jake Luciani
>            Assignee: Pavel Yaskevich
>              Labels: performance
>             Fix For: 2.1 beta2
>
>
> Currently, for every read, the CRAR reads each compressed chunk into a byte[], sends
> it to ICompressor, gets back another byte[], and verifies a checksum.
> This process is where the majority of the time in a read request is spent.
> Before compression, we had zero-copy reads and could respond directly from the
> page cache.
> It would be useful to have some kind of chunk cache that could speed up this process
> for hot data. Initially this could be an off-heap cache, but it would be great to put
> these decompressed chunks onto an SSD so the hot data lives on a fast disk, similar
> to https://github.com/facebook/flashcache.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
