incubator-cassandra-user mailing list archives

From Maxim Kramarenko <maxi...@trackstudio.com>
Subject Cassandra compaction disk space logic
Date Wed, 19 May 2010 21:14:05 GMT
Hi!

We run a mail archive application, so we have a lot of data (30 TB across
multiple nodes) and must delete data after a few months of retention.

Our questions are:

1) Compaction requires extra disk space to run. What happens if a node has
no spare space for compaction? Will it crash, or just stop the compaction
process?

2) Is it possible to limit the maximum SSTable file size? I am worried about
the following situation: we have a 1 TB disk, 600 GB of data in a single
file, and need to delete 50 GB of outdated data. Compaction could then
generate another 550 GB data file, which cannot fit on the disk alongside
the original.
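To make the arithmetic in that scenario concrete, here is a minimal sketch (not Cassandra code; the function name and the worst-case model are my own assumptions) of the free-space check implied by the question: during a major compaction the old SSTable and the merged output coexist on disk until the rewrite completes.

```python
# Hypothetical back-of-the-envelope check, NOT part of Cassandra:
# assumes the merged output is the input size minus purged data,
# and that old SSTables are deleted only after the new one is written.

def compaction_fits(disk_gb: float, data_gb: float, deleted_gb: float) -> bool:
    """Return True if the rewritten SSTable fits alongside the old one."""
    new_file_gb = data_gb - deleted_gb      # merged output after purging tombstoned data
    peak_usage_gb = data_gb + new_file_gb   # old and new files coexist briefly
    return peak_usage_gb <= disk_gb

# The scenario from the question: 1 TB disk, 600 GB in one file,
# 50 GB of outdated data to purge. Peak usage is 600 + 550 = 1150 GB.
print(compaction_fits(1000, 600, 50))  # prints False: 1150 GB > 1000 GB
```

Under this model the compaction cannot complete, which is exactly the concern behind wanting a cap on SSTable size: many smaller files keep the per-compaction peak well below total disk capacity.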

3) If we have 30 TB of data including replicas, how much disk space is
required to handle it, including adding new data, deleting old data,
compaction, etc.?

4) What happens if we run decommission, but the target node does not have
enough disk space?
