cassandra-user mailing list archives

From Alain RODRIGUEZ <>
Subject Re: how to immediately delete tombstones
Date Mon, 04 Jun 2018 14:57:20 GMT

When you have run out of disk space (or nearly so), there are things you
can do:

*Make some space:*
- Remove snapshots (nodetool clearsnapshot)
- Remove any heap dump that might be stored there
- Remove *old* -tmp- SSTables that could still be around
- Truncate unused tables/data: truncate does not create tombstones, but
removes the files outright, after creating a snapshot (default behavior).
Thus truncating/dropping a table and then removing its snapshot can free
disk space immediately.
- Is there anything other than SSTables on this disk that can be removed?
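To get a feel for how much of the "make some space" steps would reclaim, here is a minimal sketch that sums the size of snapshot files and leftover -tmp- SSTables under a data directory. The function name and the "-tmp-" filename check are assumptions for illustration; verify against your Cassandra version's file layout before deleting anything.

```python
import os


def reclaimable_bytes(data_dir):
    """Rough estimate of bytes held by snapshots and -tmp- SSTables.

    Assumption: snapshots live in a 'snapshots' directory component and
    leftover temporary SSTables contain '-tmp-' in the file name.
    """
    total = 0
    for root, _dirs, files in os.walk(data_dir):
        in_snapshot = "snapshots" in root.split(os.sep)
        for name in files:
            if in_snapshot or "-tmp-" in name:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass  # file vanished between listing and stat
    return total
```

This only measures; the actual cleanup should still go through `nodetool clearsnapshot` rather than manual deletion where possible.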


*Add some space:*

- Add some disk space temporarily (EBS, a physical disk)
- Make sure tombstones can actually be cleaned ('unchecked_tombstone_compaction:
true' often helps)
- Wait for tombstones to be compacted together with all the data they
shadow, and for disk usage to drop
- Remove the extra disk
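The tombstone-cleaning option above is a per-table compaction subproperty. A sketch of how it could be set, assuming SizeTieredCompactionStrategy (keep your table's existing strategy and other subproperties; 'my_ks' and 'my_table' are placeholders):

```sql
-- Placeholder keyspace/table names; keep your real compaction class.
ALTER TABLE my_ks.my_table
  WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'unchecked_tombstone_compaction': 'true'
  };
```

This lets Cassandra run tombstone-removal compactions on single SSTables without first checking whether they overlap other SSTables, which can help tombstones get purged sooner.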


*Play around, at the edge*:

- Look at the biggest SSTables that you can actually compact (be aware of
compressed vs uncompressed sizes when monitoring; I believe
'nodetool compactionstats -H' shows uncompressed values)
- Use sstablemetadata to determine the ratio of data that is droppable
- Run a user-defined compaction on these SSTables specifically
- If it works and more disk space becomes available, repeat with bigger
SSTables
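To pick candidates for that user-defined compaction, the droppable-tombstone ratio can be read from `sstablemetadata` output. A minimal sketch, assuming the output contains an "Estimated droppable tombstones" line (the regex and threshold are assumptions; check your version's exact output format):

```python
import re


def droppable_ratio(metadata_output):
    """Extract the estimated droppable tombstone ratio, or None if absent."""
    m = re.search(r"Estimated droppable tombstones:\s*([0-9.]+)", metadata_output)
    return float(m.group(1)) if m else None


def compaction_candidates(outputs_by_sstable, threshold=0.3):
    """Return SSTable names whose droppable ratio exceeds the threshold.

    outputs_by_sstable maps an SSTable file name to its sstablemetadata
    output text; the 0.3 threshold is an arbitrary starting point.
    """
    candidates = []
    for name, out in outputs_by_sstable.items():
        ratio = droppable_ratio(out)
        if ratio is not None and ratio > threshold:
            candidates.append(name)
    return candidates
```

The resulting file names are what you would then feed to the JMX call mentioned below in the thread (the CompactionManager MBean exposes a user-defined compaction operation) to compact exactly those SSTables.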

In some complex cases removing commit logs has helped as well, but this is
riskier, as it means playing with consistency/durability.

> I'm using cassandra on a single node.

I would not play with commit logs with a single-node setup. But I imagine
it is not a production 'cluster' either.

Alain Rodriguez - @arodream -
France / Spain

The Last Pickle - Apache Cassandra Consulting

2018-06-02 8:29 GMT+01:00 Nitan Kainth <>:

> You can compact selective sstables using jmx Call.
> Sent from my iPhone
> On Jun 2, 2018, at 12:04 AM, onmstester onmstester <>
> wrote:
> Thanks for your replies
> But my current situation is that i do not have enough free disk for my
> biggest sstable, so i could not run major compaction or nodetool
> garbagecollect
> Sent using Zoho Mail <>
> ---- On Thu, 31 May 2018 22:32:32 +0430 *Alain RODRIGUEZ
> < <>>* wrote ----
