cassandra-user mailing list archives

From "Kenneth Brotman" <kenbrot...@yahoo.com.INVALID>
Subject RE: Urgent Problem - Disk full
Date Wed, 04 Apr 2018 12:30:32 GMT
Assuming the data model is good and there haven’t been any sudden jumps in memory use, the normal thing to do would be to archive some of the old time-series data that you don’t care about.


Kenneth Brotman
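For the archiving route suggested above, one workable pattern is to export the table with cqlsh (`COPY ks.table TO 'dump.csv' WITH HEADER = TRUE`) and then split the exported rows at a cutoff before deleting the old ones from the cluster. A minimal sketch in Python — the column name `event_time` and the ISO-8601 timestamp format are assumptions for illustration, not details from this thread:

```python
import csv

def split_archive(rows, cutoff_iso, ts_field="event_time"):
    """Partition exported rows into (keep, archive) lists.

    ISO-8601 strings compare correctly as plain strings, so no date
    parsing is needed. 'event_time' is a placeholder for whatever
    clustering column carries the timestamp in your table.
    """
    keep, archive = [], []
    for row in rows:
        (archive if row[ts_field] < cutoff_iso else keep).append(row)
    return keep, archive

def archive_csv(src, keep_path, archive_path, cutoff_iso):
    """Read a cqlsh COPY export and write two CSVs: recent rows to
    keep_path, rows older than the cutoff to archive_path."""
    with open(src, newline="") as f:
        reader = csv.DictReader(f)
        fields = reader.fieldnames
        keep, old = split_archive(reader, cutoff_iso)
    for path, subset in ((keep_path, keep), (archive_path, old)):
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fields)
            writer.writeheader()
            writer.writerows(subset)
    return len(keep), len(old)
```

Bear in mind that after archiving, the old rows would still have to be deleted and their tombstones compacted away before disk space is actually reclaimed, which has its own cost; this only sketches the export side.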


From: Rahul Singh [mailto:rahul.xavier.singh@gmail.com]
Sent: Wednesday, April 04, 2018 4:38 AM
To: user@cassandra.apache.org
Subject: Re: Urgent Problem - Disk full

Nothing a full repair won’t be able to fix.


On Apr 4, 2018, 7:32 AM -0400, Jürgen Albersdorfer <Juergen.Albersdorfer@zweiradteile.net> wrote:



Hi,

I have an urgent problem: I will run out of disk space in the near future.
The largest table is a time-series table using TimeWindowCompactionStrategy (TWCS) with default_time_to_live = 0.
The keyspace has replication factor RF=3, and I run C* version 3.11.2.
We have grown the cluster over time, so the SSTable files carry different dates on different nodes.

From an application standpoint it would be safe to lose some of the oldest data.

Is it safe to delete some of the oldest SSTable files, which TWCS compaction will no longer touch, while the node is cleanly shut down? And to do so for one node after another?
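Whether an old SSTable holds only expendable data can be checked from its maximum cell timestamp, which the `sstablemetadata` tool (shipped with Cassandra) reports — in microseconds since the epoch on 3.x. The sketch below shows only the selection logic, not a recommendation to delete files by hand; the sample metadata output is fabricated for illustration:

```python
import re
from datetime import datetime, timezone

def max_timestamp_micros(metadata_text):
    """Pull the 'Maximum timestamp' value out of sstablemetadata output."""
    m = re.search(r"Maximum timestamp:\s*(\d+)", metadata_text)
    return int(m.group(1)) if m else None

def is_fully_before(metadata_text, cutoff):
    """True if every cell in the SSTable is older than the cutoff datetime,
    i.e. the whole file holds only data the application can afford to lose."""
    max_us = max_timestamp_micros(metadata_text)
    if max_us is None:
        return False
    return datetime.fromtimestamp(max_us / 1_000_000, tz=timezone.utc) < cutoff

# Fabricated sstablemetadata output: newest cell is from 2017-03-01.
sample = ("Minimum timestamp: 1483228800000000\n"
          "Maximum timestamp: 1488326400000000\n")
print(is_fully_before(sample, datetime(2018, 1, 1, tzinfo=timezone.utc)))  # True
```

Even for files TWCS will never compact again, removing them on only some of the RF=3 replicas risks old data reappearing at read time, so any such cleanup would have to be applied consistently across all replicas.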

Or is there a different way to free some disk space? Any suggestions?

best regards
Jürgen Albersdorfer

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@cassandra.apache.org
For additional commands, e-mail: user-help@cassandra.apache.org

