cassandra-commits mailing list archives

From "Peter Schuller (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-3832) gossip stage backed up due to migration manager future de-ref
Date Mon, 06 Feb 2012 05:23:59 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-3832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13201040#comment-13201040 ]

Peter Schuller commented on CASSANDRA-3832:
-------------------------------------------

I added logging to see which node it's waiting on a response from, and quickly logged into
that node to catch it red-handed - it was sitting in the exact same place in the migration
manager, on the migration stage:

{code}
"MigrationStage:1" daemon prio=10 tid=0x00007f18ec4dc800 nid=0x1d64 waiting on condition [0x0000000043391000]
   java.lang.Thread.State: TIMED_WAITING (parking)
	at sun.misc.Unsafe.park(Native Method)
	- parking to wait for  <0x000000050157fdd0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
	at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2116)
	at org.apache.cassandra.net.AsyncResult.get(AsyncResult.java:61)
	at org.apache.cassandra.service.MigrationManager$1.runMayThrow(MigrationManager.java:124)
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
{code}

I guess we're triggering distributed deadlock "internally" within the migration stage, even
though we fixed it so that the gossip stage wouldn't be backed up. If my understanding is
correct, this is because when a node is marked alive, the other nodes only know that it has
a different schema - not which side has the "newer" schema. So when a node joins, it receives
migration messages from others while it is also sending migration messages to others and
waiting on their responses. Whenever it sends a migration message to a node whose migration
stage is itself busy waiting on a response from the node in question, we deadlock (until the
timeout fires).
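
To make the shape of that deadlock concrete, here is a minimal toy model in plain
java.util.concurrent (illustrative only, not Cassandra code, and all names are made up):
two single-threaded "migration stages" each send a request to the other and then block for
the reply, but the reply can only be produced by a task on the peer's stage, which is itself
blocked, so both sides sit parked until the timeout fires - the same TIMED_WAITING picture
as the trace above.

{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Toy model of the deadlock (not Cassandra code). Each node's single-threaded
// "migration stage" asks the peer for its schema and blocks for the reply; the
// reply can only be produced by the peer's migration stage, which is occupied
// by the peer's symmetric request.
public class MigrationStageDeadlock
{
    static final ExecutorService stageA = Executors.newSingleThreadExecutor();
    static final ExecutorService stageB = Executors.newSingleThreadExecutor();

    // "Network" reply channels: the peer's stage drops the answer in here.
    static final BlockingQueue<String> repliesToA = new LinkedBlockingQueue<>();
    static final BlockingQueue<String> repliesToB = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws Exception
    {
        // Both nodes see the other come alive with a different schema version
        // and try to pull from each other at the same time.
        Future<?> a = stageA.submit(() -> pullSchema("A", stageB, repliesToA));
        Future<?> b = stageB.submit(() -> pullSchema("B", stageA, repliesToB));
        a.get();
        b.get();
        stageA.shutdown();
        stageB.shutdown();
    }

    static void pullSchema(String self, ExecutorService peerStage, BlockingQueue<String> replies)
    {
        try
        {
            // The "request": the peer answers from its own migration stage,
            // which is stuck in its own pullSchema, so this task just queues up.
            peerStage.submit(() -> { replies.offer("schema from peer"); });

            // Block for the response, like AsyncResult.get(timeout) in the
            // stack trace above; 2s stands in for the RPC timeout.
            String reply = replies.poll(2, TimeUnit.SECONDS);
            System.out.println(self + " got: " + reply); // null: both sides timed out
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
        }
    }
}
{code}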

                
> gossip stage backed up due to migration manager future de-ref 
> --------------------------------------------------------------
>
>                 Key: CASSANDRA-3832
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3832
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.1
>            Reporter: Peter Schuller
>            Assignee: Peter Schuller
>            Priority: Blocker
>             Fix For: 1.1
>
>         Attachments: CASSANDRA-3832-trunk-dontwaitonfuture.txt
>
>
> This is just bootstrapping a ~180-node trunk cluster. After a while, a
> node I was on was stuck thinking all nodes were down, because the gossip
> stage was backed up: it was spending a long time (multiple seconds or
> more, I suppose the RPC timeout) on the following. A cluster-wide restart
> brought it back to normal. I have not investigated further.
> {code}
> "GossipStage:1" daemon prio=10 tid=0x00007f9d5847a800 nid=0xa6fc waiting on condition
[0x000000004345f000]
>    java.lang.Thread.State: WAITING (parking)
> 	at sun.misc.Unsafe.park(Native Method)
> 	- parking to wait for  <0x00000005029ad1c0> (a java.util.concurrent.FutureTask$Sync)
> 	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
> 	at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
> 	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:969)
> 	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1281)
> 	at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
> 	at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> 	at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:364)
> 	at org.apache.cassandra.service.MigrationManager.rectifySchema(MigrationManager.java:132)
> 	at org.apache.cassandra.service.MigrationManager.onAlive(MigrationManager.java:75)
> 	at org.apache.cassandra.gms.Gossiper.markAlive(Gossiper.java:802)
> 	at org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:918)
> 	at org.apache.cassandra.gms.GossipDigestAckVerbHandler.doVerb(GossipDigestAckVerbHandler.java:68)
> {code}
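
For what it's worth, the attachment name ("dontwaitonfuture") points at the direction of the
fix: hand the schema work to the migration stage without blocking the gossip stage on the
returned future. A minimal sketch of that hand-off pattern (illustrative only, not the actual
patch; names such as slowSchemaPull are made up):

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the non-blocking hand-off (NOT the actual patch; names are invented).
public class NonBlockingHandoff
{
    static final ExecutorService gossipStage    = Executors.newSingleThreadExecutor();
    static final ExecutorService migrationStage = Executors.newSingleThreadExecutor();

    public static void main(String[] args) throws Exception
    {
        gossipStage.submit(() -> {
            // Before: the gossip stage did the equivalent of
            // FBUtilities.waitOnFuture(migrationStage.submit(...)), so every later
            // gossip message queued up behind a potentially slow schema pull.
            // After: submit and return immediately; the gossip stage stays responsive.
            migrationStage.submit(NonBlockingHandoff::slowSchemaPull);
            System.out.println("gossip stage is already free for the next message");
        });

        gossipStage.shutdown();
        gossipStage.awaitTermination(1, TimeUnit.SECONDS);
        migrationStage.shutdown();
        migrationStage.awaitTermination(10, TimeUnit.SECONDS);
    }

    static void slowSchemaPull()
    {
        try
        {
            Thread.sleep(3000); // stand-in for a schema pull that waits on a remote node
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
        }
        System.out.println("schema pull finished on the migration stage");
    }
}
{code}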

