cassandra-user mailing list archives

From <>
Subject RE: Decommissioning node is causing broken pipe error
Date Fri, 06 May 2011 01:15:12 GMT
Unfortunately no messages at ERROR level:

INFO [Thread-460] 2011-05-04 21:31:14,427 (line 121) Streaming of file progress=41536315392/98879102890 - 42% from org.apache.cassandra.streaming.StreamInSession@4eef9d00 failed: requesting a retry.
DEBUG [Thread-460] 2011-05-04 21:31:14,427 (line 48) Deleting MeterRecords-tmp-f-3522-Data.db
DEBUG [Thread-460] 2011-05-04 21:31:16,410 (line 125) error reading from socket; closing
No space left on device
        at Method)
        at org.apache.cassandra.streaming.IncomingStreamReader.readFile(

Not sure why we didn't think to check available disk space to begin with, but it would have
been nice to get an error regardless.

Thanks again for your help!

From: aaron morton []
Sent: Thursday, May 05, 2011 4:54 PM
Subject: Re: Decommissioning node is causing broken pipe error

Could you provide some of the log messages from when the receiver ran out of disk space? Sounds
like it should be at ERROR level.


Aaron Morton
Freelance Cassandra Developer

On 6 May 2011, at 09:16, Sameer Farooqui wrote:

Just wanted to update you guys that we turned on DEBUG level logging on the decommissioned
node and the node receiving the decommissioned node's range. We did this by editing <cassandra-home>/conf/
and changing the log4j.rootLogger to DEBUG.
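For reference, the change amounts to a one-line edit in the log4j server properties file. The fragment below is a sketch based on the default 0.7-era layout; the exact file name and appender names (stdout, R) may differ in your distribution:

```
# <cassandra-home>/conf/log4j-server.properties (illustrative; appender
# names come from the default config shipped with Cassandra 0.7)
log4j.rootLogger=DEBUG,stdout,R
```

Remember to set it back to INFO afterwards; DEBUG logging is very chatty during streaming.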

We ran decommission again and saw that the receiving node was running out of disk space.
The 184GB file was not able to fully stream to the receiving node.

We simply added more disk space to the receiving node and then decommission ran successfully.
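A quick pre-flight check on the receiving node would catch this before streaming starts. A minimal sketch in Python, where the data directory path and the expected stream size are illustrative inputs (the real path is whatever data_file_directories points at in cassandra.yaml, and the size can be read off the streaming log line):

```python
import shutil

def enough_space(data_dir: str, incoming_bytes: int) -> bool:
    """Return True if data_dir's filesystem has at least incoming_bytes free."""
    free = shutil.disk_usage(data_dir).free
    return free >= incoming_bytes

if __name__ == "__main__":
    # The stream log above reported a 98,879,102,890-byte total;
    # "/var/lib/cassandra/data" is an illustrative default path.
    print(enough_space("/var/lib/cassandra/data", 98_879_102_890))
```

Note this only checks the space for the incoming file itself; compaction afterwards can temporarily need considerably more headroom.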

Thanks for your help Aaron and also thanks for all those Cassandra articles on your blog.
We found them helpful.

- Sameer
Accenture Technology Labs

On Thu, May 5, 2011 at 3:59 AM, aaron morton <<>> wrote:
Yes that was what I was trying to say.

Aaron Morton
Freelance Cassandra Developer

On 5 May 2011, at 18:52, Tyler Hobbs wrote:

On Thu, May 5, 2011 at 1:21 AM, Peter Schuller <<>> wrote:
> It's no longer recommended to run nodetool compact regularly as it can mean
> that some tombstones do not get to be purged for a very long time.
I think this is a mis-typing; it used to be that major compactions
were necessary to remove tombstones, but this is no longer the case in
0.7, so the need for major compactions is significantly lessened
or even eliminated. However, running major compactions won't cause
tombstones *not* to be removed; it's just not required *in order* for
them to be removed.

I think he was suggesting that any tombstones *left* in the large sstable generated by the
major compaction won't be removed for a long time because that sstable itself will not participate
in any minor compactions for a long time.  (In general, rows in that sstable will not be merged
for a long time.)
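The reason the big sstable sits out of minor compactions is that size-tiered compaction groups sstables of similar size and only compacts a bucket once it has several members; one huge sstable has no peers. A toy sketch of that bucketing idea (the ratio and the 4-member trigger are illustrative, not Cassandra's exact algorithm):

```python
def bucket_by_size(sstable_sizes, ratio=2.0):
    """Toy size-tiered grouping: an sstable joins the first bucket whose
    average size is within `ratio` of its own size, else starts a new one.
    A bucket is eligible for minor compaction once it has >= 4 members."""
    buckets = []
    for size in sorted(sstable_sizes):
        for b in buckets:
            avg = sum(b) / len(b)
            if avg / ratio <= size <= avg * ratio:
                b.append(size)
                break
        else:
            buckets.append([size])
    return buckets

if __name__ == "__main__":
    sizes = [10, 12, 11, 9, 184_000]  # MB; one huge post-major-compaction sstable
    for b in bucket_by_size(sizes):
        print(b, "-> compacts" if len(b) >= 4 else "-> waits")
```

The four small sstables form a bucket and compact; the 184GB one waits alone until enough comparably sized sstables accumulate, which is why its tombstones (and its rows generally) go unmerged for a long time.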

Tyler Hobbs
Software Engineer, DataStax<>
Maintainer of the pycassa<> Cassandra Python client

This message is for the designated recipient only and may contain privileged, proprietary,
or otherwise private information. If you have received it in error, please notify the sender
immediately and delete the original. Any other use of the email by you is prohibited.
