From: "chang lou (Jira)"
To: issues@zookeeper.apache.org
Date: Mon, 2 Sep 2019 21:03:00 +0000 (UTC)
Subject: [jira] [Created] (ZOOKEEPER-3531) Synchronization on ACLCache causes cluster to hang when network/disk issues happen during datatree serialization

chang lou created ZOOKEEPER-3531:
------------------------------------

             Summary: Synchronization on ACLCache causes cluster to hang when network/disk issues happen during datatree serialization
                 Key: ZOOKEEPER-3531
                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3531
             Project: ZooKeeper
          Issue Type: Bug
    Affects Versions: 3.5.5, 3.5.4, 3.5.3, 3.5.2
            Reporter: chang lou
         Attachments: fix.patch, generator.py

During our ZooKeeper fault-injection testing, we observed that the ZK cluster could sometimes hang (requests time out while node status still shows ok). After inspecting the issue, we believe this is caused by performing I/O (serializing the ACLCache) inside a critical section. The bug is essentially similar to the one described in ZOOKEEPER-2201.

org.apache.zookeeper.server.DataTree#serialize calls aclCache.serialize when serializing the datatree; however, org.apache.zookeeper.server.ReferenceCountedACLCache#serialize can get stuck at OutputArchive.writeInt due to network/disk issues. This causes the system to hang in the same way as ZOOKEEPER-2201: any attempt to create/delete/modify a DataNode causes the leader to hang at the beginning of the request processor chain.
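To make the pattern concrete, here is a minimal sketch of the problem (the class and field names below are hypothetical illustrations, not the actual ReferenceCountedACLCache source): the serialize method performs OutputArchive writes while holding the cache's monitor, so a stalled write keeps the lock and blocks every other synchronized method on the cache, including convertLong, which sits on the request-processing path.

{code:java}
import java.io.IOException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.jute.OutputArchive;
import org.apache.zookeeper.data.ACL;

// Illustrative sketch only; names are hypothetical and simplified.
class AclCacheSketch {
    private final Map<Long, List<ACL>> longKeyMap = new HashMap<>();

    // I/O happens while the monitor is held: a stuck write here keeps the
    // lock and stalls every other synchronized method on this object.
    public synchronized void serialize(OutputArchive oa) throws IOException {
        oa.writeInt(longKeyMap.size(), "map");        // can block on a slow disk/network
        for (Map.Entry<Long, List<ACL>> entry : longKeyMap.entrySet()) {
            oa.writeLong(entry.getKey(), "long");     // further writes can block as well
            // (serialization of the ACL list omitted for brevity)
        }
    }

    // Called on the request path (via DataTree#getACL); it cannot make
    // progress while serialize() above is stuck holding the same monitor.
    public synchronized List<ACL> convertLong(Long longVal) {
        return longKeyMap.get(longVal);
    }
}
{code}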
The root cause is lock contention between:

 * org.apache.zookeeper.server.DataTree#serialize -> org.apache.zookeeper.server.ReferenceCountedACLCache#serialize
 * PrepRequestProcessor#getRecordForPath -> org.apache.zookeeper.server.DataTree#getACL(org.apache.zookeeper.server.DataNode) -> org.apache.zookeeper.server.ReferenceCountedACLCache#convertLong

When the snapshot gets stuck in ACL serialization, it blocks all other operations on the ReferenceCountedACLCache. Since getRecordForPath calls ReferenceCountedACLCache#convertLong, any operation that triggers getRecordForPath causes the leader to hang at the beginning of the request processor chain:

{code:java}
org.apache.zookeeper.server.ReferenceCountedACLCache.convertLong(ReferenceCountedACLCache.java:87)
org.apache.zookeeper.server.DataTree.getACL(DataTree.java:734)
   - locked org.apache.zookeeper.server.DataNode@4a062b7d
org.apache.zookeeper.server.ZKDatabase.aclForNode(ZKDatabase.java:371)
org.apache.zookeeper.server.PrepRequestProcessor.getRecordForPath(PrepRequestProcessor.java:170)
   - locked java.util.ArrayDeque@3f7394f7
org.apache.zookeeper.server.PrepRequestProcessor.pRequest2Txn(PrepRequestProcessor.java:417)
org.apache.zookeeper.server.PrepRequestProcessor.pRequest(PrepRequestProcessor.java:757)
org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:145)
{code}

Similar to ZOOKEEPER-2201, the leader can still send out heartbeats, so the cluster will not recover until the network/disk issue resolves.

Steps to reproduce this bug:
# start a cluster with 1 leader and n followers
# manually create some ACLs to enlarge the window for dumping ACLs, so that the snapshot is more likely to hang while serializing the ACLCache when a delay happens (we wrote a script to generate such workloads; see attachments)
# inject long network/disk write delays and run some benchmarks to trigger snapshots
# once stuck, you should observe that new requests to the cluster fail

Essentially, the core problem is that the OutputArchive writes should not be performed inside the synchronized block. A straightforward solution is to move the writes out of the sync block: take a copy inside the sync block and perform the vulnerable network writes afterwards (a sketch of this idea is shown below). The patch for this solution is attached and verified. A more systematic fix would be to replace the synchronized methods in ReferenceCountedACLCache with a ConcurrentHashMap-based implementation.

We double-checked that the issue remains in the latest version of the master branch (68c21988d55c57e483370d3ee223c22da2d1bbcf).

Attachments are 1) a patch with the fix and a regression test and 2) a script that generates workloads to fill the ACL cache.
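A minimal sketch of the copy-then-write idea, using the same hypothetical names as the sketch above (the attached fix.patch is the actual verified change): the in-memory copy is taken while holding the lock, and the potentially slow OutputArchive writes happen after the lock is released, so convertLong and the other synchronized methods stay responsive during a snapshot.

{code:java}
import java.io.IOException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.jute.OutputArchive;
import org.apache.zookeeper.data.ACL;

// Illustrative sketch of the copy-then-write approach; names are hypothetical.
class AclCacheCopyThenWriteSketch {
    private final Map<Long, List<ACL>> longKeyMap = new HashMap<>();

    public void serialize(OutputArchive oa) throws IOException {
        Map<Long, List<ACL>> snapshot;
        synchronized (this) {
            // Cheap in-memory copy while holding the lock; no I/O here.
            snapshot = new HashMap<>(longKeyMap);
        }
        // Potentially slow network/disk writes happen outside the critical
        // section, so other synchronized methods (e.g. convertLong) can proceed.
        oa.writeInt(snapshot.size(), "map");
        for (Map.Entry<Long, List<ACL>> entry : snapshot.entrySet()) {
            oa.writeLong(entry.getKey(), "long");
            // (serialization of the ACL list omitted for brevity)
        }
    }

    public synchronized List<ACL> convertLong(Long longVal) {
        return longKeyMap.get(longVal);
    }
}
{code}

The copy adds memory overhead proportional to the size of the ACL map, but it removes the I/O from the critical section, which is what lets the request processor chain keep making progress while the snapshot write is stalled.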