cassandra-commits mailing list archives

From Bartłomiej Romański (JIRA) <j...@apache.org>
Subject [jira] [Comment Edited] (CASSANDRA-5371) Perform size-tiered compactions in L0 ("hybrid compaction")
Date Thu, 26 Dec 2013 14:25:56 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13856888#comment-13856888 ]

Bartłomiej Romański edited comment on CASSANDRA-5371 at 12/26/13 2:23 PM:
--------------------------------------------------------------------------

Hi,

We hit the same bug in production recently. We worked around it by switching to STCS for a
few days, letting the cluster stabilize, and then going back to LCS. A long trip, but a fully
successful one.

In our case we had a lot of sstables at L0 as a result of a migration. Because of another bug
in sstableloader (CASSANDRA-6527), we ended up simply copying all the sstable files from the
old cluster to the new one.

After the migration we had over 10k sstables (160 MB per file) on each node. Of course, the
STCS fallback activated automatically in that case.
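
For readers unfamiliar with the fallback: once L0 holds more sstables than a threshold, LCS
stops attempting leveled promotions there and instead picks size-tiered buckets among the L0
files. A simplified, self-contained sketch of that decision follows; the constants and names
are illustrative, not the actual LeveledManifest code:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Illustrative sketch only -- not the actual LeveledManifest code.
    public class L0FallbackSketch
    {
        static final int MAX_COMPACTING_L0 = 32; // fallback threshold (illustrative)
        static final int MIN_BUCKET = 4;         // STCS min_threshold
        static final int MAX_BUCKET = 32;        // STCS max_threshold

        // Returns one size-tiered bucket of L0 sstable sizes to compact,
        // or null if L0 is small enough for normal leveled compaction.
        static List<Long> stcsFallbackCandidate(List<Long> l0SizesInBytes)
        {
            if (l0SizesInBytes.size() <= MAX_COMPACTING_L0)
                return null; // LCS can keep up; no fallback needed

            // STCS-style bucketing: sort by size, group files whose size
            // stays within 50% of the running bucket average.
            List<Long> sorted = new ArrayList<>(l0SizesInBytes);
            Collections.sort(sorted);
            List<Long> bucket = new ArrayList<>();
            double avg = 0;
            for (long size : sorted)
            {
                if (!bucket.isEmpty() && size > avg * 1.5)
                {
                    if (bucket.size() >= MIN_BUCKET)
                        return bucket;
                    bucket = new ArrayList<>(); // too small to merge; start over
                }
                bucket.add(size);
                avg += (size - avg) / bucket.size();
                if (bucket.size() == MAX_BUCKET)
                    return bucket; // cap one compaction at max_threshold files
            }
            return bucket.size() >= MIN_BUCKET ? bucket : null;
        }

        public static void main(String[] args)
        {
            // 10k sstables of ~160 MB each, as after the migration described above
            List<Long> l0 = new ArrayList<>();
            for (int i = 0; i < 10_000; i++)
                l0.add(160L * 1024 * 1024);
            List<Long> bucket = stcsFallbackCandidate(l0);
            System.out.println("would size-tier " + bucket.size() + " L0 sstables");
        }
    }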

I wonder whether a similar situation will happen after a classic bootstrap. Will streaming
during bootstrap put sstables at L0 or at their original levels?

If it puts them all at L0, then I'm not sure falling back to STCS is the best way to handle
the situation. I've read the comment in the code, and I understand why the fallback is a good
thing when L0 fills up with sstables from too many random inserts: each of those sstables
covers the whole ring, so there's simply no better option.

However, after a bootstrap the situation looks a bit different. The loaded sstables already
have very small ranges! We just have to tidy up a bit and everything should be OK. STCS
ignores that completely; after a while we have somewhat fewer sstables, but each of them
covers the whole ring instead of just a small part. I believe that in this case letting LCS
do the job is a better option than letting STCS mix everything up first. The sketch below
illustrates the difference.
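
To make that concrete: an LCS level above L0 requires pairwise disjoint token ranges, and
sstables copied from another LCS cluster already come close to satisfying that, while STCS
output does not. A hypothetical sketch of the invariant (token bounds as plain longs):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Illustrative sketch: when can a set of sstables sit together in one
    // LCS level above L0 as-is? Exactly when their token ranges are
    // pairwise disjoint.
    public class LevelableCheck
    {
        record Bounds(long first, long last) {} // first/last token covered by an sstable

        static boolean canFormSingleLevel(List<Bounds> sstables)
        {
            List<Bounds> sorted = new ArrayList<>(sstables);
            sorted.sort(Comparator.comparingLong(Bounds::first));
            for (int i = 1; i < sorted.size(); i++)
                if (sorted.get(i).first() <= sorted.get(i - 1).last())
                    return false; // overlap -> must be compacted together first
            return true;
        }

        public static void main(String[] args)
        {
            // Narrow, disjoint ranges: what migrated LCS sstables look like
            List<Bounds> migrated = List.of(new Bounds(0, 99), new Bounds(100, 199), new Bounds(200, 299));
            // Full-ring sstables: what STCS leaves behind
            List<Bounds> stcs = List.of(new Bounds(0, 999), new Bounds(0, 999));
            System.out.println(canFormSingleLevel(migrated)); // true: LCS could just place them
            System.out.println(canFormSingleLevel(stcs));     // false: everything overlaps
        }
    }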

Is there a way to disable the STCS fallback? I'd be glad to test this option the next time we
do a similar operation.



> Perform size-tiered compactions in L0 ("hybrid compaction")
> -----------------------------------------------------------
>
>                 Key: CASSANDRA-5371
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5371
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Jonathan Ellis
>            Assignee: Jonathan Ellis
>             Fix For: 2.0 beta 1
>
>         Attachments: HybridCompactionStrategy.java
>
>
> If LCS gets behind, read performance deteriorates as we have to check bloom filters on
many sstables in L0.  For wide rows, this can mean having to seek for each one since the BF
doesn't help us reject much.
> Performing size-tiered compaction in L0 will mitigate this until we can catch up on merging
it into higher levels.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
