Date: Thu, 7 Sep 2017 01:42:00 +0000 (UTC)
From: "Cesar Stuardo (JIRA)"
To: dev@zookeeper.apache.org
Subject: [jira] [Commented] (ZOOKEEPER-2778) Potential server deadlock between follower sync with leader and follower receiving external connection requests.

    [ https://issues.apache.org/jira/browse/ZOOKEEPER-2778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16156297#comment-16156297 ]

Cesar Stuardo commented on ZOOKEEPER-2778:
------------------------------------------

Hey,

Happy to help! Are we correct about the issue (regarding the path)?

> Potential server deadlock between follower sync with leader and follower receiving external connection requests.
> ----------------------------------------------------------------------------------------------------------------
>
>                 Key: ZOOKEEPER-2778
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2778
>             Project: ZooKeeper
>          Issue Type: Bug
>          Components: quorum
>    Affects Versions: 3.5.3
>            Reporter: Michael Han
>            Assignee: Michael Han
>            Priority: Critical
>
> It's possible to have a deadlock during the recovery phase.
> Found this issue by analyzing thread dumps of the "flaky" ReconfigRecoveryTest [1]. Here is a sample thread dump that illustrates the state of the execution:
> {noformat}
>     [junit]  java.lang.Thread.State: BLOCKED
>     [junit]         at org.apache.zookeeper.server.quorum.QuorumPeer.getElectionAddress(QuorumPeer.java:686)
>     [junit]         at org.apache.zookeeper.server.quorum.QuorumCnxManager.initiateConnection(QuorumCnxManager.java:265)
>     [junit]         at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:445)
>     [junit]         at org.apache.zookeeper.server.quorum.QuorumCnxManager.receiveConnection(QuorumCnxManager.java:369)
>     [junit]         at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:642)
>     [junit]
>     [junit]  java.lang.Thread.State: BLOCKED
>     [junit]         at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:472)
>     [junit]         at org.apache.zookeeper.server.quorum.QuorumPeer.connectNewPeers(QuorumPeer.java:1438)
>     [junit]         at org.apache.zookeeper.server.quorum.QuorumPeer.setLastSeenQuorumVerifier(QuorumPeer.java:1471)
>     [junit]         at org.apache.zookeeper.server.quorum.Learner.syncWithLeader(Learner.java:520)
>     [junit]         at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:88)
>     [junit]         at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:1133)
> {noformat}
> The deadlock happens between the quorum peer thread, which runs the follower's sync-with-leader work, and the listener thread of that peer's QuorumCnxManager (qcm), which handles incoming connections. To finish syncing with the leader, the follower thread needs to synchronize on both QV_LOCK and the qcm object it owns; meanwhile, to finish setting up an incoming connection, the receiver thread needs to synchronize on the same qcm object and the same QV_LOCK. The problem is that the two threads acquire the two locks in opposite orders, so depending on timing and the actual execution order, each thread can end up holding one lock while waiting for the other.
> [1] org.apache.zookeeper.server.quorum.ReconfigRecoveryTest.testCurrentServersAreObserversInNextConfig
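To make sure we are reading the dump the same way, here is a minimal standalone Java sketch of the lock-ordering pattern described above. The class and lock names (qvLock, cnxLock) are illustrative stand-ins, not ZooKeeper's actual code: each thread takes the two monitors in the opposite order, so each can end up BLOCKED holding one lock while waiting for the other.

{noformat}
// Minimal sketch (not ZooKeeper code) of the lock-ordering problem described above.
// qvLock and cnxLock are hypothetical stand-ins for QuorumPeer's QV_LOCK and the
// QuorumCnxManager monitor.
public class LockOrderDeadlockSketch {
    private static final Object qvLock = new Object();   // stands in for QV_LOCK
    private static final Object cnxLock = new Object();  // stands in for the qcm monitor

    public static void main(String[] args) {
        // Follower/QuorumPeer-like thread: takes QV_LOCK first, then the cnx manager.
        Thread syncWithLeader = new Thread(() -> {
            synchronized (qvLock) {
                sleepQuietly(100); // widen the race window
                synchronized (cnxLock) {
                    System.out.println("syncWithLeader acquired both locks");
                }
            }
        });

        // Listener/receiveConnection-like thread: takes the cnx manager first, then QV_LOCK.
        Thread receiveConnection = new Thread(() -> {
            synchronized (cnxLock) {
                sleepQuietly(100);
                synchronized (qvLock) {
                    System.out.println("receiveConnection acquired both locks");
                }
            }
        });

        syncWithLeader.start();
        receiveConnection.start();
        // With the sleeps above, each thread usually ends up holding one lock and
        // blocking forever on the other -- the two BLOCKED frames seen in the dump.
    }

    private static void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
{noformat}

In general, this kind of cycle only goes away if both code paths agree on a single acquisition order for the two monitors, or if one path avoids holding its first lock while calling into the other object.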