Date: Wed, 3 Aug 2016 18:09:21 +0000 (UTC)
From: "Joseph (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Updated] (HBASE-16138) Cannot open regions after non-graceful shutdown due to deadlock with Replication Table

     [ https://issues.apache.org/jira/browse/HBASE-16138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph updated HBASE-16138:
---------------------------
    Description:
If we shut down an entire HBase cluster and attempt to start it back up, we have to run the WAL pre-log roll that occurs before a region can be opened, and this pre-log roll must record the new WAL inside ReplicationQueues. That call ends up blocking in TableBasedReplicationQueues.getOrBlockOnReplicationTable() because the Replication Table is not up yet, and we cannot assign the Replication Table because we cannot open any regions. This deadlocks the entire cluster whenever we lose Replication Table availability.

There are a few options, but none of them seems very good:

1. Depend on ZooKeeper-based replication until the Replication Table becomes available.
2. Have a separate WAL for system tables that does not perform any replication (see the discussion at HBASE-14623), or just have separate WALs for non-replicated and replicated regions.
3. Record the new WAL in the ReplicationQueue asynchronously (don't block region opening on this event), which could lead to inconsistent replication state (a rough sketch of this follows below).
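To make option 3 concrete, here is a minimal, hypothetical sketch of what "record the WAL asynchronously" could look like. The ReplicationQueueClient interface and the class/method names below are illustrative stand-ins, not the actual HBase ReplicationQueues API: region opening proceeds immediately while the queue update is handed off to a background thread that retries until the Replication Table is reachable.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * Illustrative only: records new WAL files in a replication queue on a
 * background thread so that region opening never blocks on the
 * (possibly unavailable) Replication Table.
 */
public class AsyncQueueRecorder {

  /** Stand-in for the table-backed queue; may throw while the table is unavailable. */
  public interface ReplicationQueueClient {
    void recordLog(String queueId, String walName) throws Exception;
  }

  private final ReplicationQueueClient queues;
  private final ExecutorService executor = Executors.newSingleThreadExecutor();

  public AsyncQueueRecorder(ReplicationQueueClient queues) {
    this.queues = queues;
  }

  /** Called from the pre-log-roll path; returns immediately instead of blocking. */
  public void recordLogAsync(final String queueId, final String walName) {
    executor.execute(new Runnable() {
      @Override
      public void run() {
        // Retry until the Replication Table is reachable. Until it is, the
        // queue state lags behind the WALs that actually exist, which is the
        // "inconsistent replication state" risk of option 3.
        while (!Thread.currentThread().isInterrupted()) {
          try {
            queues.recordLog(queueId, walName);
            return;
          } catch (Exception e) {
            try {
              TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException ie) {
              Thread.currentThread().interrupt();
            }
          }
        }
      }
    });
  }
}

The trade-off this sketch makes explicit: a region can finish opening before its WAL is recorded in the queue, so a crash in that window could leave a WAL unaccounted for in replication.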
The stacktrace:

org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.recordLog(ReplicationSourceManager.java:376)
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.preLogRoll(ReplicationSourceManager.java:348)
org.apache.hadoop.hbase.replication.regionserver.Replication.preLogRoll(Replication.java:370)
org.apache.hadoop.hbase.regionserver.wal.FSHLog.tellListenersAboutPreLogRoll(FSHLog.java:637)
org.apache.hadoop.hbase.regionserver.wal.FSHLog.rollWriter(FSHLog.java:701)
org.apache.hadoop.hbase.regionserver.wal.FSHLog.rollWriter(FSHLog.java:600)
org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:533)
org.apache.hadoop.hbase.wal.DefaultWALProvider.getWAL(DefaultWALProvider.java:132)
org.apache.hadoop.hbase.wal.RegionGroupingProvider.getWAL(RegionGroupingProvider.java:186)
org.apache.hadoop.hbase.wal.RegionGroupingProvider.getWAL(RegionGroupingProvider.java:197)
org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:240)
org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:1883)
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:363)
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)

Does anyone have any suggestions/ideas/feedback?

Attached a review board at: https://reviews.apache.org/r/50546/
It is still pretty rough, would just like some feedback on it.
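For context on why this trace hangs: the getOrBlockOnReplicationTable() call that recordLog() ends up in presumably amounts to a wait-until-available loop along the lines of the simplified sketch below (this is an assumption for illustration, not the actual TableBasedReplicationQueues code). During a full-cluster restart no region server has the Replication Table's region open, so the check never succeeds and the OpenRegionHandler thread above never makes progress.

import java.util.concurrent.TimeUnit;

/**
 * Simplified sketch of the kind of blocking wait described in this issue;
 * not the actual TableBasedReplicationQueues implementation.
 */
public class ReplicationTableWait {

  /** Stand-in check; in HBase this would ask whether the Replication Table is online. */
  interface TableAvailability {
    boolean isReplicationTableOnline();
  }

  private final TableAvailability availability;

  ReplicationTableWait(TableAvailability availability) {
    this.availability = availability;
  }

  /**
   * Blocks until the Replication Table is online. On a full-cluster restart
   * this never returns: the table cannot come online until some region opens,
   * and no region can open until this method returns.
   */
  void getOrBlockOnReplicationTable() throws InterruptedException {
    while (!availability.isReplicationTableOnline()) {
      TimeUnit.MILLISECONDS.sleep(100);   // wait and re-check forever
    }
  }
}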
> Cannot open regions after non-graceful shutdown due to deadlock with Replication Table
> ---------------------------------------------------------------------------------------
>
>                 Key: HBASE-16138
>                 URL: https://issues.apache.org/jira/browse/HBASE-16138
>             Project: HBase
>          Issue Type: Sub-task
>          Components: Replication
>            Reporter: Joseph
>            Assignee: Joseph
>            Priority: Critical
>         Attachments: HBASE-16138.patch
>
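As a rough illustration of option 1 from the description above (fall back to ZooKeeper-based replication until the Replication Table is reachable), one possible shape is a delegating queue facade that writes to a ZooKeeper-backed store while the table is down and uses the table once it comes up. The QueueStore interface here is a simplified stand-in, not HBase's own ReplicationQueues API, and migrating entries back to the table after the outage is deliberately left out.

/**
 * Illustrative only: a queue facade that prefers the table-backed store but
 * falls back to a ZooKeeper-backed store while the Replication Table is
 * unavailable. Interfaces are simplified stand-ins, not HBase's own API.
 */
public class FallbackReplicationQueues {

  /** Minimal stand-in for a replication queue store. */
  public interface QueueStore {
    boolean isAvailable();
    void recordLog(String queueId, String walName) throws Exception;
  }

  private final QueueStore tableBased;   // e.g. backed by the Replication Table
  private final QueueStore zkBased;      // e.g. backed by ZooKeeper znodes

  public FallbackReplicationQueues(QueueStore tableBased, QueueStore zkBased) {
    this.tableBased = tableBased;
    this.zkBased = zkBased;
  }

  /**
   * Record the WAL in whichever store is currently usable. Entries written to
   * ZooKeeper during the outage would still need to be reconciled with the
   * table later, which is the awkward part of this option.
   */
  public void recordLog(String queueId, String walName) throws Exception {
    if (tableBased.isAvailable()) {
      tableBased.recordLog(queueId, walName);
    } else {
      zkBased.recordLog(queueId, walName);
    }
  }
}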
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)