Date: Wed, 28 Oct 2015 05:35:27 +0000 (UTC)
From: "Duo Zhang (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Commented] (HBASE-14004) [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin

    [ https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14977769#comment-14977769 ]

Duo Zhang commented on HBASE-14004:
-----------------------------------

The problem here affects more than just replication.

{quote}
As a result, the handler will roll back the Memstore and the later flushed HFile will also skip this record.
{quote}

What if the regionserver crashes before flushing the HFile? I think the record will come back, since it has already been persisted in the WAL.

Adding a marker may be a solution, but then you need to check for the marker everywhere the WAL is replayed, and you still need to handle a failure while placing the marker itself... I do not think it is easy to do...

The basic problem here is that the Memstore and the WAL may become inconsistent when we fail to sync the WAL. A simple solution is to kill the regionserver whenever a sync fails: we never roll back the Memstore, but instead reconstruct it from the WAL on recovery. Under this scheme we can make sure there is no difference between the Memstore and the WAL.

If we want to keep the regionserver alive when a sync fails, then I think we need to find out the real result of the sync operation. Maybe we could close the WAL file and check its length? Of course, if we have lost the connection to the namenode, I think there is no simple solution other than killing the regionserver...

Thanks.
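A minimal sketch of the abort-instead-of-rollback flow described above. All of the types here (Memstore, Wal, ServerAborter, WriteHandler) are hypothetical stand-ins rather than the real HBase classes; only the control flow is the point.

{code:java}
import java.io.IOException;

// Hypothetical interfaces standing in for the real HBase components.
interface Memstore { void add(String edit); }
interface Wal {
  long append(String edit) throws IOException;
  void sync(long txid) throws IOException;
}
interface ServerAborter { void abort(String reason, Throwable cause); }

class WriteHandler {
  private final Memstore memstore;
  private final Wal wal;
  private final ServerAborter server;

  WriteHandler(Memstore memstore, Wal wal, ServerAborter server) {
    this.memstore = memstore;
    this.wal = wal;
    this.server = server;
  }

  void write(String edit) throws IOException {
    memstore.add(edit);           // 1. insert record into Memstore
    long txid = wal.append(edit); // 2. write record to WAL
    try {
      wal.sync(txid);             // 3. sync WAL
    } catch (IOException e) {
      // 4. Do NOT roll back the Memstore here: the sync outcome is
      // unknown and the edit may already be durable on the DataNodes.
      // Abort instead, so the Memstore is rebuilt from the WAL on
      // recovery and the two can never disagree.
      server.abort("WAL sync failed, aborting to avoid Memstore/WAL divergence", e);
      throw e;
    }
  }
}
{code}

The key design choice is step 4: because a failed sync RPC does not tell us whether the bytes reached the DataNodes, the only state we can trust after recovery is the WAL itself, so the handler gives up its in-memory state rather than guessing.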
> [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-14004
>                 URL: https://issues.apache.org/jira/browse/HBASE-14004
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: He Liangliang
>            Priority: Critical
>              Labels: replication, wal
>
> It looks like the current write path can cause inconsistency between the Memstore/HFile and the WAL, which can leave the slave cluster with more data than the master cluster.
> The simplified write path looks like:
> 1. insert record into Memstore
> 2. write record to WAL
> 3. sync WAL
> 4. roll back Memstore if 3 fails
> It's possible for the HDFS sync RPC call to fail while the data has already been (possibly partially) transported to the DataNodes, where it eventually gets persisted. As a result, the handler will roll back the Memstore, and the later flushed HFile will also skip this record.
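The comment above suggests probing the real outcome of a failed sync by closing the WAL file and checking its length. A hedged sketch of such a probe follows, assuming the writer has already been closed; FileSystem and Path are the standard Hadoop client API, but the class, method, and the expectedOffset bookkeeping are illustrative assumptions, not HBase code.

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class SyncOutcomeProbe {
  /**
   * After a failed sync, close the WAL writer, then compare the file
   * length reported by HDFS against the byte offset up to which we
   * attempted to write. If the file covers the edit, the edit is
   * durable and the Memstore must NOT be rolled back; otherwise the
   * rollback is safe.
   */
  static boolean editIsDurable(FileSystem fs, Path walPath, long expectedOffset)
      throws IOException {
    long persisted = fs.getFileStatus(walPath).getLen();
    return persisted >= expectedOffset;
  }
}
{code}

As the comment notes, this probe itself depends on reaching the namenode; if that connection is lost, killing the regionserver remains the only safe fallback.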