Too easy. Does anybody have a more difficult approach? :) Just kidding. Thanks, Aaron.

On Mon, Feb 13, 2012 at 11:43 AM, aaron morton <> wrote:
> I am nursing an overloaded 0.6 cluster
Shine on you crazy diamond.

If you have some additional storage available I would:

1) Allocate an additional data directory for each node: stop the node and add the new directory to the DataFileDirectory list in its config.

2) Bring the node back up and run a compaction; it will now write SSTables to the new directory.

3) Once compaction finishes, I would recommend stopping the node again, moving the SSTables back to the node's local data directory, and removing the additional data file directory from the config.
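For step 1, a 0.6 node's data directories are listed in conf/storage-conf.xml. A minimal sketch of the temporary change, assuming the default data path and a hypothetical mount point /mnt/extra for the borrowed storage:

```xml
<DataFileDirectories>
    <DataFileDirectory>/var/lib/cassandra/data</DataFileDirectory>
    <!-- hypothetical temporary directory on the extra storage; remove again in step 3 -->
    <DataFileDirectory>/mnt/extra/cassandra/data</DataFileDirectory>
</DataFileDirectories>
```

After restarting the node, a major compaction can be triggered with nodetool compact; once it completes and the node is stopped, the SSTable files under the extra directory can be moved back into the local data directory before this entry is deleted.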

Hope that helps.

Aaron Morton
Freelance Developer

On 14/02/2012, at 7:10 AM, Dan Retzlaff wrote:

Hi all,

I am nursing an overloaded 0.6 cluster through compaction to get its disk usage under 50%. Many rows' contents have been replaced, so after compaction there will be plenty of room, but a couple of nodes are currently at 95% disk usage.

One strategy I considered is temporarily moving a couple of the larger SSTables to an NFS mount and putting symlinks in the data directory. However, Jonathan says that Cassandra does not handle symlinked SSTables [1]. Can someone elaborate on why this won't work?

If a hack like this is not possible, then I am at a loss for options other than ungracefully dropping the node from the cluster and reconstructing its data from other replicas. If anyone has recovered from a similar situation, I would appreciate your advice.