Date: Fri, 29 Sep 2017 02:05:00 +0000 (UTC)
From: "Duo Zhang (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Commented] (HBASE-14004) [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin

    [ https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16185257#comment-16185257 ]

Duo Zhang commented on HBASE-14004:
-----------------------------------

Will do. Thanks for the reminder, sir.
> [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-14004
>                 URL: https://issues.apache.org/jira/browse/HBASE-14004
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver, Replication
>            Reporter: He Liangliang
>            Assignee: Duo Zhang
>            Priority: Critical
>              Labels: replication, wal
>             Fix For: 2.0.0-alpha-4
>
>         Attachments: HBASE-14004.patch, HBASE-14004-v1.patch, HBASE-14004-v2.patch, HBASE-14004-v2.patch, HBASE-14004-v3.patch
>
>
> It looks like the current write path can cause an inconsistency between the memstore/HFile and the WAL, which can leave the slave cluster with more data than the master cluster.
> The simplified write path looks like:
> 1. insert the record into the Memstore
> 2. write the record to the WAL
> 3. sync the WAL
> 4. roll back the Memstore if 3 fails
> It is possible for the HDFS sync RPC call to fail while the data has already been (perhaps partially) transported to the DataNodes and finally gets persisted. As a result, the handler rolls back the Memstore and the HFile flushed later also skips this record, even though the record is still in the WAL and gets shipped to the slave cluster by replication.
> ==================================
> This is a long-lived issue. The problem above is solved by the write-path reordering, as we now sync the WAL before modifying the memstore. But the problem may still exist, because the replication thread may read the new data before we return from hflush. See this document for more details:
> https://docs.google.com/document/d/11AyWtGhItQs6vsLRIx32PwTxmBY3libXwGXI25obVEY/edit#
> So we need to keep a sync length in the WAL and tell the replication WAL reader to use it as the limit when reading this WAL file.
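
For anyone following along, here is a minimal sketch of the reordered write path described in the quote above. The WriteAheadLog and MemStore interfaces below are made up for illustration and are not the real HBase classes; the point is only the ordering: the sync has to return before the memstore is touched, so a failed sync never needs a memstore rollback.

{code:java}
import java.io.IOException;
import java.util.List;

// Illustrative sketch only: hypothetical interfaces, not the real HBase types.
public class ReorderedWritePath {

  /** Hypothetical minimal WAL abstraction. */
  interface WriteAheadLog {
    long append(List<byte[]> edits) throws IOException; // returns a txid
    void sync(long txid) throws IOException;             // blocks until the edit is durable
  }

  /** Hypothetical minimal memstore abstraction. */
  interface MemStore {
    void add(List<byte[]> edits);
  }

  private final WriteAheadLog wal;
  private final MemStore memstore;

  ReorderedWritePath(WriteAheadLog wal, MemStore memstore) {
    this.wal = wal;
    this.memstore = memstore;
  }

  // Old order: memstore first, then WAL sync, roll back the memstore on failure.
  // New order: WAL append + sync first, memstore only after the sync returned.
  void write(List<byte[]> edits) throws IOException {
    long txid = wal.append(edits);
    wal.sync(txid);      // if this throws, the memstore was never modified
    memstore.add(edits); // no rollback path needed
  }
}
{code}

Even with this ordering, a failed or still-pending sync can leave bytes on the DataNodes that replication could read too early, which is why the sync-length limit is still needed.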
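And a rough sketch, again with made-up names rather than an existing HBase API, of the "sync length" idea from the last paragraph: the write path advances a safe offset after each successful hflush, and the replication WAL reader only consumes entries strictly below that offset, so it can never ship an edit whose sync has not yet been acknowledged.

{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch only: hypothetical class, not an existing HBase API.
public class SafeSyncLengthTracker {

  // Highest WAL byte offset known to be durably synced (acked by hflush).
  private final AtomicLong safeSyncLength = new AtomicLong(0);

  /** Called on the write path after a sync up to {@code syncedOffset} returns successfully. */
  public void advance(long syncedOffset) {
    // Only move forward; concurrent syncs may complete out of order.
    safeSyncLength.accumulateAndGet(syncedOffset, Math::max);
  }

  /** Called by the replication WAL reader before handing out an entry at {@code entryOffset}. */
  public boolean isReadable(long entryOffset) {
    return entryOffset < safeSyncLength.get();
  }
}
{code}

Keeping the limit monotonic matters because multiple syncs can be in flight and finish out of order; the reader simply waits and retries later when isReadable returns false.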