cassandra-user mailing list archives

From Versátil <>
Subject RES: Large sstables
Date Thu, 06 Sep 2018 17:51:18 GMT
Remove my email please


From: Vitali Dyachuk []
Sent: Thursday, September 6, 2018 08:00
Subject: Re: Large sstables


What I have done is:
1) Added more disks, so the compaction can carry on.
2) When I switched from STCS to LCS, the STCS compactions processing the big sstables
remained, so I stopped them with nodetool stop -id queue_id, and LCS compaction started
processing the sstables. I'm using C* 3.0.17 with RF3.
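
For reference, a sketch of step 2 with nodetool (the id printed by compactionstats is what -id expects; the UUID below is purely illustrative):

```shell
# List running compactions; each row shows an id, type, keyspace,
# table and progress. (Requires a reachable Cassandra node.)
nodetool compactionstats

# Stop one stuck STCS compaction by its id (illustrative UUID):
nodetool stop -id 8f2d3e60-b1f5-11e8-96f8-529269fb1459
```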

However, the question remains: if I use sstablesplit on a 200 GB sstable to split it into
200 MB files, will that help the LCS compaction?
Or will LCS just take some data from that big sstable and merge it with other sstables on
L0 and the other levels, so that I only have to wait until the LCS compaction finishes?
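
For context, a minimal sketch of the sstablesplit invocation in question; the data path and file name are illustrative, and the tool must be run while Cassandra is stopped on the node:

```shell
# Run only while Cassandra is STOPPED on this node, otherwise the
# running process can interfere with the split output.
# -s sets the target sstable size in MB (the default is 50).
sstablesplit -s 200 /var/lib/cassandra/data/ks/table-uuid/mc-1234-big-Data.db
```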



On Sun, Sep 2, 2018 at 9:55 AM shalom sagges <> wrote:

If there are a lot of droppable tombstones, you could also run User Defined Compaction on
that (and on other) SSTable(s). 
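
A sketch of how user-defined compaction can be triggered: it is exposed as a JMX operation on CompactionManager rather than as a nodetool command. The jmxterm jar name and the sstable file name below are assumptions:

```shell
# forceUserDefinedCompaction takes a comma-separated list of
# sstable Data.db file names; all values here are illustrative.
echo "run -b org.apache.cassandra.db:type=CompactionManager \
forceUserDefinedCompaction mc-1234-big-Data.db" \
  | java -jar jmxterm-1.0.2-uber.jar -l localhost:7199 -n
```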


This blog post explains it well:


On Fri, Aug 31, 2018 at 12:04 AM Mohamadreza Rostami <> wrote:

Hi, dear Vitali,
The best option for you is to migrate the data to a new table and change the partition key
pattern to get a better distribution of data, so your sstables become smaller. But if your
data already has a good distribution and is really big, you must add new servers to your
datacenter. Changing the compaction strategy carries some risk.
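
For completeness, a hedged sketch of the strategy change being discussed (keyspace and table names are illustrative; the ALTER immediately triggers an IO-heavy re-levelling of all existing sstables on every node, which is part of the risk mentioned above):

```shell
cqlsh -e "ALTER TABLE ks.mytable WITH compaction = {
  'class': 'LeveledCompactionStrategy',
  'sstable_size_in_mb': 160 };"
```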

> On Aug 30, 2018 (Shahrivar 8, 1397 AP), at 19:54, Jeff Jirsa <> wrote:
> Either of those are options, but there’s also sstablesplit to break it up a bit.
> Switching to LCS can be a problem depending on how many sstables/overlaps you have.
> -- 
> Jeff Jirsa
>> On Aug 30, 2018, at 8:05 AM, Vitali Dyachuk <> wrote:
>> Hi,
>> Some of the sstables got too big (100 GB and more), so they are not compacting any
>> more and some of the disks are running out of space. I'm running C* 3.0.17, RF3,
>> with 10 disks/JBOD and STCS.
>> What are my options? Completely delete all data on this node and rejoin it to the
>> cluster? Change the compaction strategy to LCS and then run repair?
>> Vitali.
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: <>

> For additional commands, e-mail: <>

