Date: Mon, 21 Aug 2017 21:39:03 +0000 (UTC)
From: "Enis Soztutar (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Commented] (HBASE-18628) ZKPermissionWatcher blocks all ZK notifications

    [ https://issues.apache.org/jira/browse/HBASE-18628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16135879#comment-16135879 ]

Enis Soztutar commented on HBASE-18628:
---------------------------------------

So we could not find out why {{nodes}} stays {{non-null}}? Maybe the executor was shut down somehow?

For the patch, should {{childrenChangedFuture}} be volatile? Also, there is no guarantee that at most one {{refreshNodes}} action is in flight at a time. What happens if two or more childrenChanged notifications arrive back to back?

Maybe we should remove the task preemption, but I don't remember why we even had it in the first place.
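To make the question concrete, here is a minimal sketch of the cancel-and-resubmit pattern being discussed. Only {{childrenChangedFuture}} and {{refreshNodes}} are names taken from the discussion; the class, the constructor, and the executor wiring are hypothetical, not the actual ZKPermissionWatcher code:

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class ChildrenChangedSketch {
  // How the pool is built is an assumption; the real watcher got its
  // executor in HBASE-14370.
  private final ExecutorService executor;

  // The field the comment asks about. It is written on the ZK EventThread;
  // if any other thread ever reads it (e.g. during shutdown), it likely
  // needs to be volatile for that reader to see the latest Future.
  private volatile Future<?> childrenChangedFuture;

  public ChildrenChangedSketch(ExecutorService executor) {
    this.executor = executor;
  }

  // Runs on the ZK EventThread for every NodeChildrenChanged event.
  public void nodeChildrenChanged() {
    Future<?> previous = childrenChangedFuture;
    if (previous != null) {
      // The preemption in question: cancel a refresh that may still be
      // running. cancel() returns without waiting for the task to exit,
      // so if the pool has more than one thread, a cancelled refreshNodes()
      // can still be running when the next one starts; nothing here
      // guarantees at most one in-flight refresh.
      previous.cancel(true);
    }
    childrenChangedFuture = executor.submit(this::refreshNodes);
  }

  private void refreshNodes() {
    // re-read the children of /hbase/acl and rebuild the permission cache
  }
}
{code}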
> ZKPermissionWatcher blocks all ZK notifications
> -----------------------------------------------
>
>                 Key: HBASE-18628
>                 URL: https://issues.apache.org/jira/browse/HBASE-18628
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: Mike Drob
>            Assignee: Mike Drob
>            Priority: Critical
>             Fix For: 3.0.0, 2.0.0-alpha-3
>
>         Attachments: HBASE-18628.branch-1.v5.patch, HBASE-18628.patch, HBASE-18628.v2.patch, HBASE-18628.v3.patch, HBASE-18628.v4.patch, HBASE-18628.v5.patch, jstack
>
>
> Buckle up folks, we're going for a ride here. I've been seeing this on a branch-2 based build, but I think the problem will affect branch-1 as well. I'm not able to easily reproduce the issue, but it will usually come up within an hour on a given cluster that I have, at which point the problem persists until an RS restart. I've been seeing the problem and paying attention for maybe two months, but I suspect it's been happening much longer than that.
>
> h3. Problem
> When running in a secure cluster, sometimes the ZK EventThread will get stuck on a permissions update and not be able to process new notifications. This happens to also block flush and snapshot, which is how we found it.
>
> h3. Analysis
> The main smoking gun is seeing this in repeated jstacks:
> {noformat}
> "main-EventThread" #43 daemon prio=5 os_prio=0 tid=0x00007f0b92644000 nid=0x6e69 waiting on condition [0x00007f0b6730f000]
>    java.lang.Thread.State: TIMED_WAITING (sleeping)
>         at java.lang.Thread.sleep(Native Method)
>         at org.apache.hadoop.hbase.security.access.ZKPermissionWatcher.nodeChildrenChanged(ZKPermissionWatcher.java:191)
>         at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:503)
>         at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
>         at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
> {noformat}
> That sleep is a 20ms sleep in an {{AtomicReference.compareAndSet}} loop, but it never gets past the condition:
> {code}
> while (!nodes.compareAndSet(null, nodeList)) {
>   try {
>     Thread.sleep(20);
>   } catch (InterruptedException e) {
>     LOG.warn("Interrupted while setting node list", e);
>     Thread.currentThread().interrupt();
>   }
> }
> {code}
> The warning never shows up in the logs; it just keeps looping and looping. The last relevant line from the watcher in the logs is:
> {noformat}
> 2017-08-17 21:25:12,379 DEBUG org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: regionserver:22101-0x15df38884c80024, quorum=zk1:2181,zk2:2181,zk3:2181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/acl
> {noformat}
> Which makes sense, because the code snippet is from the permission watcher's {{nodeChildrenChanged}} handler.
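For illustration, the hand-off described above reduces to the following shape (a sketch, not the HBase source): the EventThread can only proceed once a worker empties the {{nodes}} slot, so if the consumer side never runs, the loop spins forever and the whole process stops seeing ZK events.

{code}
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

class NodesHandoffSketch {
  // Single-slot hand-off between the ZK EventThread and the refresh worker.
  private final AtomicReference<List<String>> nodes = new AtomicReference<>();

  // Runs on the ZK EventThread.
  void nodeChildrenChanged(List<String> nodeList) {
    // Spins until the slot is empty. If the consumer side is gone (worker
    // died, executor shut down) or a stale value was left behind, this
    // never succeeds, and the EventThread can deliver no further events.
    while (!nodes.compareAndSet(null, nodeList)) {
      try {
        Thread.sleep(20);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }
  }

  // Runs on the worker thread; draining the slot is the only thing that
  // unblocks the loop above.
  void drainAndRefresh() {
    List<String> pending = nodes.getAndSet(null);
    if (pending != null) {
      // rebuild the permission cache from pending
    }
  }
}
{code}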
> The separate thread introduced in HBASE-14370 is present, but not doing anything, and this event hasn't gotten to the part where it splits off into a thread:
> {noformat}
> "zk-permission-watcher4-thread-1" #160 daemon prio=5 os_prio=0 tid=0x0000000001750800 nid=0x6fd9 waiting on condition [0x00007f0b5dce5000]
>    java.lang.Thread.State: WAITING (parking)
>         at sun.misc.Unsafe.park(Native Method)
>         - parking to wait for <0x00000007436ecea0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>         at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>         at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> {noformat}
>
> h3. Solutions
> There are a few approaches we can take to fix this; I think they are all complementary. It might be useful to file subtasks or new issues for some of the solutions if they are longer term.
> # Move flush and snapshot to ProcedureV2. This makes my proximate problem go away, but it's only relevant to branch-2 and master, and doesn't fix anything on branch-1. Also, permission updates would still get stuck, preventing future permission updates. I think this is important long term for the robustness of the system, but not a viable short-term fix.
> # Add an Executor to ZookeeperWatcher and launch threads from there. Maybe we'd want to pull the Executor out of ZKPW, but that's not strictly necessary and can be optimized later; if we're already threading, then adding another layer isn't a huge cost.
> # Figure out the race condition or logic problem that causes {{nodes}} to be non-null above. I've tried looking at this and visual inspection isn't getting me anywhere.
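For option 2 in the list above, one way the handler could avoid blocking the EventThread entirely (a sketch under assumed names, not from any of the attached patches) is to submit the refresh to a dedicated executor and coalesce bursts of notifications:

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicBoolean;

class NonBlockingWatcherSketch {
  private final ExecutorService executor =
      Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "zk-permission-watcher");
        t.setDaemon(true);
        return t;
      });
  // True while a refresh is queued but not yet started.
  private final AtomicBoolean refreshPending = new AtomicBoolean(false);

  // Runs on the ZK EventThread and returns immediately; the EventThread
  // never sleeps or spins, so other ZK notifications keep flowing.
  void nodeChildrenChanged() {
    if (refreshPending.compareAndSet(false, true)) {
      executor.submit(() -> {
        // Clear the flag before refreshing: a notification arriving during
        // the refresh queues one more pass, which picks up the newest state.
        refreshPending.set(false);
        refreshNodes();
      });
    }
  }

  private void refreshNodes() {
    // list the children of /hbase/acl and rebuild the permission cache
  }
}
{code}

A coalescing flag like this would also sidestep the preemption question in the comment above: a refresh that starts after a notification always reads the current children, so cancelling overlapping refreshes becomes unnecessary.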