My understanding is that deleting the .json metadata file is currently the only way. If you search the user list archives, you'll find folks who are building tools to force compaction and rebuild sstables at the new size. I believe there's been some talk of including those tools as part of a future release.

Also, to answer your question about bloom filters: those are handled differently. If you run upgradesstables after altering the bloom filter FP ratio, it will rebuild the bloom filters for each sstable.
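For context on why the FP ratio is worth tuning before a rebuild: the theoretical size of an optimal bloom filter scales with the target false-positive chance as -ln(p)/ln(2)^2 bits per key. A quick illustrative Python sketch (Cassandra's actual sizing rounds internally, so treat these as ballpark numbers):

```python
import math

def bf_bits_per_key(fp_chance):
    """Theoretical optimal bloom filter size, in bits per key,
    for a target false-positive chance."""
    return -math.log(fp_chance) / (math.log(2) ** 2)

# Lowering bloom_filter_fp_chance from 0.1 to 0.01 roughly
# doubles the filter's memory footprint:
print(round(bf_bits_per_key(0.1), 2))   # ~4.79 bits/key
print(round(bf_bits_per_key(0.01), 2))  # ~9.59 bits/key
```

So a smaller FP chance buys fewer wasted disk seeks at the cost of more off-heap memory per key, which is why the rebuild via upgradesstables matters: the on-disk filters won't shrink or grow until they're rewritten.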

On Mon, Jul 22, 2013 at 2:49 PM, Janne Jalkanen <> wrote:

I don't think upgradesstables is enough, since it's more of a "rewrite this file in the new format, but don't try to merge sstables and compact" kind of operation.

Deleting the .json file is probably the only way, but someone more familiar with Cassandra LCS might be able to say whether manually editing the json file to drop every sstable down one level would work. Since they would overflow their new levels, they would compact soon anyway, but the impact might be less drastic than deleting the .json file outright (which sends everything to L0)...
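As a sketch of what that manual edit could look like. This assumes the 1.x leveled manifest layout of {"generations": [{"generation": N, "members": [...]}, ...]}, which you should verify against your own .json (with the node stopped, and a backup taken) before touching anything:

```python
import json

def demote_one_level(manifest):
    """Shift every sstable down one level (L2 -> L1, L1 -> L0),
    leaving L0 members where they are.

    `manifest` is a dict in the assumed 1.x leveled-manifest layout:
    {"generations": [{"generation": N, "members": [...]}, ...]}.
    """
    moved = {}
    for gen in manifest.get("generations", []):
        target = max(gen["generation"] - 1, 0)
        moved.setdefault(target, []).extend(gen["members"])
    return {"generations": [
        {"generation": level, "members": members}
        for level, members in sorted(moved.items())
    ]}

# Usage sketch (hypothetical path):
# with open("mytable.json") as f:
#     manifest = json.load(f)
# with open("mytable.json", "w") as f:
#     json.dump(demote_one_level(manifest), f)
```

The idea is that each level ends up over its size target by roughly a factor of 10, so LCS starts promoting sstables back up on its own, rather than re-leveling 500 GB from L0 in one go.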


On 22 Jul 2013, at 16:02, Keith Wright <> wrote:

Hi all,

   I know there have been several threads on this recently, but I wanted to make sure I got a clear answer: we are looking to increase the SSTable size for a couple of our LCS tables, as well as the chunk size (to match the SSD block size). The largest table is 500 GB across 6 nodes (RF 3, C* 1.2.4, vnodes). I wanted to get feedback on the best way to make this change with minimal load impact on the cluster. After I make the change, I understand that I need to force the nodes to re-compact the tables.

Can this be done via upgradesstables, or do I need to shut down each node, delete the .json file, and restart, as some have suggested?

I assume I can do this one node at a time?

If I change the bloom filter size, I assume I will need to force compaction again?  Using the same methodology?

Thank you