Unfortunately no messages at ERROR level:
INFO [Thread-460] 2011-05-04 21:31:14,427 StreamInSession.java (line 121) Streaming of file /raiddrive/MDR/MeterRecords-f-2264-Data.db/(98339515276,197218618166)
progress=41536315392/98879102890 - 42% from org.apache.cassandra.streaming.StreamInSession@4eef9d00 failed: requesting a retry.
DEBUG [Thread-460] 2011-05-04 21:31:14,427 FileUtils.java (line 48) Deleting MeterRecords-tmp-f-3522-Data.db
DEBUG [Thread-460] 2011-05-04 21:31:16,410 IncomingTcpConnection.java (line 125) error reading from socket; closing
java.io.IOException: No space left on device
at sun.nio.ch.FileDispatcher.pwrite0(Native Method)
Not sure why we didn’t think to check available disk space to begin with, but it would have been nice to get an error regardless.
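For anyone else hitting this: the check we skipped can be sketched in a few lines. This is a hypothetical pre-flight script, not anything Cassandra does itself; the path and size are just placeholders for your data directory and the expected stream size.

```python
# Hypothetical pre-flight check (not part of Cassandra): verify the data
# directory's filesystem has enough free space before accepting a stream.
import shutil

def has_room(path, required_bytes):
    """Return True if `path`'s filesystem has at least `required_bytes` free."""
    usage = shutil.disk_usage(path)  # namedtuple: (total, used, free)
    return usage.free >= required_bytes

# The failed stream above was ~98GB (98879102890 bytes) on the receiver.
if not has_room("/", 98_879_102_890):
    print("WARNING: not enough free space for the incoming stream")
```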
Thanks again for your help!
Could you provide some of the log messages from when the receiver ran out of disk space? It sounds like that should be logged at ERROR level.
On 6 May 2011, at 09:16, Sameer Farooqui wrote:
Just wanted to update you guys that we turned on DEBUG level logging on the decommissioned node and the node receiving the decommissioned node's range. We did this by editing <cassandra-home>/conf/log4j-server.properties and changing the log4j.rootLogger to DEBUG.
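For reference, the change amounts to a single line in that file; the appender names ("stdout", "R") are from a stock config and may differ between Cassandra versions:

```properties
# conf/log4j-server.properties
# Change the root logger level from INFO to DEBUG, keeping the
# existing console ("stdout") and rolling file ("R") appenders.
log4j.rootLogger=DEBUG,stdout,R
```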
We ran decommission again and saw that the receiving node was running out of disk space. The 184GB file could not fully stream to the receiving node.
We simply added more disk space to the receiving node and then decommission ran successfully.
Thanks for your help Aaron and also thanks for all those Cassandra articles on your blog. We found them helpful.
Accenture Technology Labs
On Thu, May 5, 2011 at 3:59 AM, aaron morton <firstname.lastname@example.org> wrote:
Yes that was what I was trying to say.
On 5 May 2011, at 18:52, Tyler Hobbs wrote:
On Thu, May 5, 2011 at 1:21 AM, Peter Schuller <email@example.com> wrote:
> It's no longer recommended to run nodetool compact regularly as it can mean
> that some tombstones do not get to be purged for a very long time.
I think this is a mistyping; it used to be that major compactions
were necessary to remove tombstones, but that is no longer the case in
0.7, so the need for major compactions is significantly lessened
or even eliminated. However, running major compactions won't cause
tombstones *not* to be removed; it's just not required *in order* for
them to be removed.
I think he was suggesting that any tombstones *left* in the large sstable generated by the major compaction won't be removed for a long time because that sstable itself will not participate in any minor compactions for a long time. (In general, rows in that sstable will not be merged for a long time.)
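The effect described above can be illustrated with a toy model of size-tiered bucketing (this is a simplification for illustration, not Cassandra's actual compaction code; the `ratio` and `min_threshold` values mimic common defaults):

```python
# Toy illustration of why one very large SSTable left by a major compaction
# stops participating in minor compactions: size-tiered compaction only
# merges buckets of similar-sized tables, and nothing else is near its size.

def compactable_buckets(sstable_sizes, ratio=2.0, min_threshold=4):
    """Group sizes into buckets of similar size; return buckets big enough to compact."""
    buckets = []
    for size in sorted(sstable_sizes):
        for b in buckets:
            avg = sum(b) / len(b)
            if avg / ratio <= size <= avg * ratio:
                b.append(size)
                break
        else:
            buckets.append([size])
    return [b for b in buckets if len(b) >= min_threshold]

# After a major compaction: one 184GB table plus a stream of fresh 64MB flushes.
sizes_mb = [184_000] + [64] * 4
print(compactable_buckets(sizes_mb))  # only the small tables form a bucket
```

The 184,000MB table sits alone in its bucket, so any tombstones inside it wait until enough comparably sized tables accumulate.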
Software Engineer, DataStax
Maintainer of the pycassa Cassandra Python client library