If you're on 3.0 (3.0.6 or 3.0.8 or newer, I don't remember which), consider TWCS - it was designed for TTL-only time series use cases like this
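As a sketch (the keyspace/table names and the one-day window here are placeholders, not recommendations - pick a window so you end up with a reasonable number of sstables over your retention period), switching an existing table to TWCS is a single ALTER:

    ALTER TABLE my_keyspace.my_table
    WITH compaction = {'class': 'TimeWindowCompactionStrategy',
                       'compaction_window_unit': 'DAYS',
                       'compaction_window_size': '1'};

With a 100 day TTL and daily windows, once every row in a window's sstable has expired, the whole sstable can be dropped instead of being compacted away piecemeal.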

Alternatively, if you have IO to spare, you may find LCS works as well (it'll cause quite a bit more compaction, but has a much higher chance of compacting away tombstones)

There are also tombstone-focused sub-properties to more aggressively compact sstables that have a lot of tombstones - check the docs for "unchecked_tombstone_compaction" and "tombstone_threshold" - enabling those will trigger more aggressive automatic single-sstable compactions
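For illustration (names and the 0.2 value are examples only, not tuned recommendations), those sub-properties can be set alongside the existing strategy:

    ALTER TABLE my_keyspace.my_table
    WITH compaction = {'class': 'SizeTieredCompactionStrategy',
                       'unchecked_tombstone_compaction': 'true',
                       'tombstone_threshold': '0.2'};

tombstone_threshold is the estimated ratio of droppable tombstones above which a single-sstable compaction is considered, and unchecked_tombstone_compaction lets that compaction run even when overlapping sstables would normally prevent it.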

Jeff Jirsa

On Sep 2, 2017, at 7:10 AM, qf zhou <zhouqf2013@gmail.com> wrote:

Yes, you are right. I am using the STCS compaction strategy with a kind of time series model, and too much disk space has been occupied.

What should I do to stop the disk from filling up?

I only want to keep the most recent 100 days of data, so I set default_time_to_live = 8640000 (100 days).

I know I need to do something to reduce the disk space usage, but I really don't know how to do it.

Here is the strategy of the big data table :

    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '12', 'tombstone_threshold': '0.1', 'unchecked_tombstone_compaction': 'true'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 8640000
    AND gc_grace_seconds = 432000

On Sep 2, 2017, at 7:34 PM, Nicolas Guyomar <nicolas.guyomar@gmail.com> wrote:

You are using the STCS compaction strategy with some kind of time series model, and you are going to end up with your disk full!