cassandra-user mailing list archives

From qf zhou <zhouqf2...@gmail.com>
Subject Re: old big tombstone data file occupy much disk space
Date Sat, 02 Sep 2017 14:10:00 GMT

Yes, you are right. I am using the STCS compaction strategy with a time-series-like data model,
and too much disk space has been occupied.

What should I do to stop the disk from filling up?

I only want to keep the most recent 100 days of data, so I set default_time_to_live = 8640000
(100 days).

I know I need to do something to stop the disk space growth, but I really don’t know how to
do it.


Here is the strategy of the big data table :

    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy',
'max_threshold': '32', 'min_threshold': '12', 'tombstone_threshold': '0.1', 'unchecked_tombstone_compaction':
'true'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 8640000
    AND gc_grace_seconds = 432000
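
For anyone double-checking the numbers above, the two expiry-related settings in the schema convert to days like this (a quick sanity check using only the values shown, nothing else assumed):

```python
# Sanity-check the TTL and GC grace values from the table definition above.
SECONDS_PER_DAY = 24 * 60 * 60  # 86400

default_time_to_live = 8_640_000  # from the schema
gc_grace_seconds = 432_000        # from the schema

ttl_days = default_time_to_live // SECONDS_PER_DAY
gc_grace_days = gc_grace_seconds // SECONDS_PER_DAY

print(f"default_time_to_live = {ttl_days} days")  # prints 100
print(f"gc_grace_seconds     = {gc_grace_days} days")  # prints 5
```

Note that in Cassandra, expired rows become tombstones and normally cannot be purged until gc_grace_seconds after they expire, so the effective on-disk retention is closer to 105 days than 100.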



> On Sep 2, 2017, at 7:34 PM, Nicolas Guyomar <nicolas.guyomar@gmail.com>
wrote:
> 
> you are using the STCS compaction strategy with some kind of time-series model, and you are
> going to end up with your disk full!

