cassandra-commits mailing list archives

From "Sylvain Lebresne (Updated) (JIRA)" <>
Subject [jira] [Updated] (CASSANDRA-3432) Avoid large array allocation for compressed chunk offsets
Date Mon, 31 Oct 2011 17:31:32 GMT


Sylvain Lebresne updated CASSANDRA-3432:

    Attachment: 0001-Break-down-large-long-array.patch

Attaching patch to do this. I suppose in a perfect world we could reuse the added BigLongArray
class in our OpenBitSet implementation, but I didn't bother for now (it wouldn't save much).
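For reference, a minimal sketch of the chunked-array idea (page size, field names, and the
exact layout here are illustrative, not necessarily what the attached patch does): instead of
one contiguous long[], the offsets are stored in many small pages, so no single allocation is
large enough to stress the GC or worsen heap fragmentation.

    /** Sketch of a paged long array; indices are split into page/offset pairs. */
    public class BigLongArray
    {
        private static final int PAGE_SIZE = 4096; // longs per page (illustrative)

        private final long[][] pages;
        public final int size;

        public BigLongArray(int size)
        {
            this.size = size;
            int pageCount = (size + PAGE_SIZE - 1) / PAGE_SIZE;
            pages = new long[pageCount][];
            for (int i = 0; i < pageCount; i++)
            {
                // last page may be shorter than PAGE_SIZE
                int pageLength = Math.min(PAGE_SIZE, size - i * PAGE_SIZE);
                pages[i] = new long[pageLength];
            }
        }

        public void set(int idx, long value)
        {
            pages[idx / PAGE_SIZE][idx % PAGE_SIZE] = value;
        }

        public long get(int idx)
        {
            return pages[idx / PAGE_SIZE][idx % PAGE_SIZE];
        }
    }

With 4096-long pages, each allocation is 32KB, comfortably below the sizes that tend to cause
fragmentation trouble, while get/set stay O(1).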
> Avoid large array allocation for compressed chunk offsets
> ---------------------------------------------------------
>                 Key: CASSANDRA-3432
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>    Affects Versions: 1.0.0
>            Reporter: Sylvain Lebresne
>            Assignee: Sylvain Lebresne
>            Priority: Minor
>              Labels: compression
>             Fix For: 1.0.2
>         Attachments: 0001-Break-down-large-long-array.patch
> For each compressed file we keep the chunk offsets in memory (a long[]). The size of
> this array is directly proportional to the sstable file size and inversely proportional
> to the chunk_length_kb used: a 64GB sstable with the default 64KB chunks has ~1M chunks,
> i.e. ~8MB of offsets in memory.
> Without being absolutely huge, this probably makes the life of the GC harder than necessary,
> for the same reasons as CASSANDRA-2466, and this ticket proposes the same solution, i.e.
> to break down those big arrays into smaller ones to ease fragmentation.
> Note that this is only a concern for size-tiered compaction. But until leveled compaction
> is battle tested, is the default, and we know nobody uses size-tiered anymore, it's probably
> worth making the optimization.

This message is automatically generated by JIRA.
