There are no errors in the logs about streaming.

And thanks for the information, we will try 1.1 when we start upgrading.

koji

2012/6/5 aaron morton <aaron@thelastpickle.com>
Are there any errors in the logs about failed streaming?

If you are getting timeouts, note that 1.0.8 added a streaming socket timeout: https://github.com/apache/cassandra/blob/trunk/CHANGES.txt#L323
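For reference, that timeout is controlled by a setting in cassandra.yaml. A sketch of the relevant fragment (the specific value here is an assumption; check your own config path and defaults for your version):

```yaml
# cassandra.yaml (available from 1.0.8 onward)
# Close a streaming connection after this many milliseconds of socket
# inactivity, so a hung stream fails and can be retried rather than
# sitting at 100%/0% forever. A value of 0 disables the timeout.
streaming_socket_timeout_in_ms: 600000   # e.g. 10 minutes; value is an assumption
```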

Cheers

-----------------
Aaron Morton
Freelance Developer
@aaronmorton

On 4/06/2012, at 3:12 PM, koji wrote:


aaron morton <aaron@thelastpickle.com> writes:


Did you restart ? All good?
Cheers


-----------------
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com


On 27/04/2012, at 9:49 AM, Bryce Godfrey wrote:

This is the second node I’ve joined to my cluster in the last few days, and
so far both have become stuck at 100% on a large file according to netstats.
This is on 1.0.9; is there anything I can do to make it move on besides
restarting Cassandra?  I don’t see any errors or warnings in the logs for
either server, and there is plenty of disk space.

On the sender side I see this:

Streaming to: /10.20.1.152

   /opt/cassandra/data/MonitoringData/PropertyTimeline-hc-80540-Data.db
sections=1 progress=82393861085/82393861085 - 100%

On the joining node I don’t see this file in netstats, and all pending
streams are sitting at 0%.



Hi
we have the same problem (on 1.0.7); our netstats output looks like this:

Mode: NORMAL
Streaming to: /1.1.1.1
  /mnt/ebs1/cassandra-data/data/NemoModel/OfflineMessage-hc-3757-Data.db
  sections=1234 progress=3256666/3256666 - 100%
  /mnt/ebs1/cassandra-data/data/NemoModel/OfflineMessage-hc-3641-Data.db
  sections=4386 progress=0/1025272214 - 0%
  /mnt/ebs1/cassandra-data/data/NemoModel/OfflineMessage-hc-3761-Data.db
  sections=2956 progress=0/17826723 - 0%
  /mnt/ebs1/cassandra-data/data/NemoModel/OfflineMessage-hc-3730-Data.db
  sections=3792 progress=0/56066299 - 0%
  /mnt/ebs1/cassandra-data/data/NemoModel/OfflineMessage-hc-3760-Data.db
  sections=4384 progress=0/90941161 - 0%
  /mnt/ebs1/cassandra-data/data/NemoModel/OfflineMessage-hc-3687-Data.db
  sections=3958 progress=0/54729557 - 0%
  /mnt/ebs1/cassandra-data/data/NemoModel/OfflineMessage-hc-3762-Data.db
  sections=766 progress=0/2605165 - 0%
Streaming to: /1.1.1.2
  /mnt/ebs1/cassandra-data/data/NemoModel/OneWayFriend-hc-709-Data.db
  sections=3228 progress=29175698/29175698 - 100%
  /mnt/ebs1/cassandra-data/data/NemoModel/OneWayFriend-hc-789-Data.db
  sections=2102 progress=0/618938 - 0%
  /mnt/ebs1/cassandra-data/data/NemoModel/OneWayFriend-hc-765-Data.db
  sections=3044 progress=0/1996687 - 0%
  /mnt/ebs1/cassandra-data/data/NemoModel/OneWayFriend-hc-788-Data.db
  sections=2773 progress=0/1374636 - 0%
  /mnt/ebs1/cassandra-data/data/NemoModel/OneWayFriend-hc-729-Data.db
  sections=3150 progress=0/22111512 - 0%
Nothing streaming from /1.1.1.1
Nothing streaming from /1.1.1.2
Pool Name                    Active   Pending      Completed
Commands                        n/a         1       23825242
Responses                       n/a        25       19644808


After a restart, the pending streams are cleared, but the next time we run
"nodetool repair -pr" the pending streams appear again. And this always
happens on the same node (we have 12 nodes in total).
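A quick way to pick the stuck sstables out of output like the above is to print each file line whose following progress line ends in 0%. A sketch (in practice you would pipe live `nodetool netstats` output through the awk filter; a sample from this thread is inlined here so the snippet is self-contained):

```shell
#!/bin/sh
# List sstables whose streams have made no progress (0%).
# Live usage would be:  nodetool netstats | awk '/- 0%$/ {print prev} {prev=$0}'
netstats_sample='Mode: NORMAL
Streaming to: /1.1.1.1
  /mnt/ebs1/cassandra-data/data/NemoModel/OfflineMessage-hc-3757-Data.db
  sections=1234 progress=3256666/3256666 - 100%
  /mnt/ebs1/cassandra-data/data/NemoModel/OfflineMessage-hc-3641-Data.db
  sections=4386 progress=0/1025272214 - 0%'

# The file path precedes its progress line, so remember the previous line
# and print it whenever the current line ends with "- 0%".
stuck=$(printf '%s\n' "$netstats_sample" | awk '/- 0%$/ {print prev} {prev=$0}')
printf '%s\n' "$stuck"
# prints only the OfflineMessage-hc-3641 path (the 100% stream is skipped)
```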

koji