cassandra-user mailing list archives

From: Jeff Jirsa <jji...@gmail.com>
Subject: Re: old big tombstone data file occupy much disk space
Date: Fri, 01 Sep 2017 06:17:25 GMT
Use user-defined compaction to run a single-sstable compaction on just that sstable.

It's a nodetool command in very recent versions, or a JMX method in older versions.
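For example (a rough sketch only, not run against your cluster; the file names and paths below are placeholders you'd replace with the output of sstablemetadata / your data directory): on 3.4 and later you can trigger it straight from nodetool:

    nodetool compact --user-defined /var/lib/cassandra/data/<keyspace>/<table>-<id>/mc-1234-big-Data.db

On 3.0.x you'd invoke the forceUserDefinedCompaction operation on the org.apache.cassandra.db:type=CompactionManager MBean over JMX, either with a JMX client such as jmxterm or with a few lines of Java like the following (host, port, and the sstable path are assumptions):

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class ForceUserDefinedCompaction
    {
        public static void main(String[] args) throws Exception
        {
            // 7199 is Cassandra's default JMX port; point this at the node that owns the sstable.
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url))
            {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                ObjectName compactionManager =
                    new ObjectName("org.apache.cassandra.db:type=CompactionManager");
                // Comma-separated list of -Data.db files to compact; this path is just an example.
                String dataFiles = "/var/lib/cassandra/data/<keyspace>/<table>-<id>/mc-1234-big-Data.db";
                mbs.invoke(compactionManager,
                           "forceUserDefinedCompaction",
                           new Object[]{ dataFiles },
                           new String[]{ "java.lang.String" });
            }
        }
    }

Note the compaction rewrites only that one sstable, so it needs enough free disk for a temporary second copy of it, and tombstones whose partitions overlap data in other sstables may still be kept.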


-- 
Jeff Jirsa


> On Aug 31, 2017, at 11:04 PM, qf zhou <zhouqf2013@gmail.com> wrote:
> 
> I am using a cluster with 3 nodes, and the Cassandra version is 3.0.9. I have used it for about 6 months. Now each node has about 1.5 TB of data on disk.
> I found that some sstable files are over 300 GB. Using the sstablemetadata command, I found: Estimated droppable tombstones: 0.9622972799707109.
> It is obvious that too much tombstone data exists.
> The table has default_time_to_live = 8640000 (100 days) and gc_grace_seconds = 432000 (5 days). Using nodetool compactionstats, I found that some compaction processes exist.
> So I really want to know how to clear the tombstone data; otherwise it will cost too much disk space.
> I really need some help, because few people in my company know Cassandra.
> Thank you very much!
