cassandra-commits mailing list archives

From "Rick Branson (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-5371) Perform size-tiered compactions in L0
Date Fri, 03 May 2013 15:38:16 GMT


Rick Branson commented on CASSANDRA-5371:

Is this just waiting on [~tjake]'s test to backport to 1.2? 

Yesterday we bootstrapped our first new node on our first LCS cluster, where each node only
had ~50GB of data, and it took 6 hours to complete the bootstrap, even after running the CPUs
hot by bumping compaction throughput up to 64MB/sec. We probably could have stood to raise this
to 128MB/sec and pegged them, but I dread to think what this would be like if we moved
some larger, read-heavy data sets to Cassandra under LCS. Jake seems to think this patch will
help with that.

This is on an EC2 hi1.4xlarge, which is a 16-core box w/60GB RAM and 2TB of SSD storage.
We also have a cluster of m1.xlarges (4-core, 15GB RAM, 2TB rust), each with ~300GB of relatively
cold data under STCS. Considering the spinning-rust cluster, w/1GigE and 16MB/sec compaction
throughput, can bootstrap a new node in < 2 hours with 6x as much data, we will definitely
be trying this patch on the SSD cluster currently running LCS.
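A rough back-of-the-envelope comparison of the two bootstraps described above (all figures taken from this comment; this is only an average over the whole bootstrap, not a measured streaming rate):

```python
# Back-of-the-envelope effective bootstrap throughput, using only the
# figures quoted in this comment (~50GB in 6h on LCS/SSD vs ~300GB in
# ~2h on STCS/spinning rust).

def effective_mb_per_sec(data_gb, hours):
    """Average throughput over the entire bootstrap, in MB/sec."""
    return data_gb * 1024 / (hours * 3600)

lcs_ssd = effective_mb_per_sec(50, 6)    # LCS cluster on SSDs
stcs_rust = effective_mb_per_sec(300, 2) # STCS cluster on spinning disks

print(f"LCS/SSD:   {lcs_ssd:.1f} MB/sec")
print(f"STCS/rust: {stcs_rust:.1f} MB/sec")
```

The STCS cluster effectively moved data more than an order of magnitude faster, despite slower disks and a lower compaction throughput cap, which is the gap this patch is expected to close.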
> Perform size-tiered compactions in L0
> -------------------------------------
>                 Key: CASSANDRA-5371
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Jonathan Ellis
>            Assignee: Jonathan Ellis
>             Fix For: 2.0
>         Attachments:
> If LCS gets behind, read performance deteriorates as we have to check bloom filters on
many sstables in L0.  For wide rows, this can mean having to seek for each one since the BF
doesn't help us reject much.
> Performing size-tiered compaction in L0 will mitigate this until we can catch up on merging
it into higher levels.
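
A minimal sketch of the idea in the issue description: size-tiered compaction groups sstables of similar size into buckets and merges each bucket into one sstable, so a backlogged L0 ends up with far fewer files for a read to consult. This is a hypothetical illustration, not Cassandra's actual SizeTieredCompactionStrategy code; the `bucket_high`/`bucket_low` ratios are assumptions modeled on STCS's configurable thresholds.

```python
# Hypothetical sketch of size-tiered bucketing: group sstables whose
# sizes fall within [bucket_low, bucket_high] of a bucket's running
# average, so each bucket can be compacted into a single sstable.
# Not Cassandra's actual implementation.

def bucket_by_size(sstable_sizes, bucket_high=1.5, bucket_low=0.5):
    """Group sizes (in MB) into buckets of similar-sized sstables."""
    buckets = []  # each entry: [running average, [member sizes]]
    for size in sorted(sstable_sizes):
        for bucket in buckets:
            avg, members = bucket
            if bucket_low * avg <= size <= bucket_high * avg:
                members.append(size)
                bucket[0] = sum(members) / len(members)
                break
        else:
            buckets.append([size, [size]])
    return [members for _, members in buckets]

# Ten small flushed sstables plus two large ones form two buckets, so
# one size-tiered pass merges the ten small L0 files into a single file
# -- one bloom filter to check instead of ten.
sizes = [10] * 10 + [500, 520]
print(bucket_by_size(sizes))
```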

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see:
