cassandra-user mailing list archives

Subject Re: Re: Cleaning up related issue
Date Mon, 19 Jun 2017 07:00:50 GMT
Thanks for the quick response. It is the existing node where the cleanup failed, and it has a larger data volume than the other nodes.
From: Akhil Mehra
Date: 2017-06-19 14:56
To: wxn002
CC: user
Subject: Re: Cleaning up related issue
Is the node with the large volume a new node or an existing node? If it is an existing node,
is it the one where the nodetool cleanup failed?


On 19/06/2017, at 6:40 PM, wrote:

After adding a new node, I started a cleanup task to remove the old data on the other 4
nodes. All went well except on one node: the cleanup took hours, and on the third node the
Cassandra daemon crashed. I checked the node and found the crash was caused by an OOM, and
the Cassandra data volume had zero space left. I removed the temporary files, which I believe
were created during the cleanup process, and restarted Cassandra.
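For anyone hitting the same situation, the temp-file step can be sketched roughly like this. The data directory path and the file-name pattern below are assumptions (they vary by Cassandra version and cassandra.yaml), not details from this thread, so verify the list before deleting anything:

```shell
# Sketch only -- DATA_DIR is an assumption; adjust it to the
# data_file_directories setting in your cassandra.yaml.
DATA_DIR=/var/lib/cassandra/data

# SSTables being rewritten by an interrupted cleanup/compaction carry a
# "tmp" marker in their file names (e.g. tmp-ka-1234-Data.db).
# List them first, with Cassandra stopped:
find "$DATA_DIR" -type f -name '*tmp*'

# Only after verifying the listed files are really leftovers:
# find "$DATA_DIR" -type f -name '*tmp*' -delete
```

Deleting live (non-tmp) SSTables instead would lose data, which is why the destructive command is left commented out.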

The node rejoined the cluster successfully, but I noticed one thing: in the "nodetool status"
output, this node holds much more data than the other nodes. Normally the load should be about
700 GB, but it is actually over 1000 GB. Why is it larger? Please see the output below.

UN   705.98 GB   256   40.4%   9180b7c9-fa0b-4bbe-bf62-64a599c01e58
UN   691.07 GB   256   39.9%   e24d13e2-96cb-4e8c-9d94-22498ad67c85
UN   623.73 GB   256   39.3%   385ad28c-0f3f-415f-9e0a-7fe8bef97e17
UN   779.38 GB   256   40.1%   46f37f06-9c45-492d-bd25-6fef7f926e38
UN  1022.7 GB    256   40.3%   a31b6088-0cb2-40b4-ac22-aec718dbd035
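A likely explanation for the extra ~300 GB: the node still holds data for token ranges it no longer owns, which the interrupted cleanup never removed. Re-running `nodetool cleanup` on that node once there is free disk should reclaim it. To spot the outlier quickly, the load column of the output above can be filtered; this is a sketch assuming the column layout shown (status, load, unit, tokens, owns, host ID) and an arbitrary 900 GB threshold:

```shell
# Sketch -- assumes the column layout of the "nodetool status" lines above
# and a hypothetical 900 GB threshold for flagging oversized nodes.
nodetool status | awk '$1 == "UN" && $3 == "GB" && $2 + 0 > 900 {
    print $NF, "holds", $2, "GB -- candidate for nodetool cleanup"
}'
```

Note that cleanup rewrites each SSTable minus the no-longer-owned ranges, so it temporarily needs free space on the order of the largest SSTable being rewritten.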

