Date: Mon, 18 Sep 2017 17:41:01 +0000 (UTC)
From: "Ted Yu (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Commented] (HBASE-14004) [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin

    [ https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170374#comment-16170374 ]

Ted Yu commented on HBASE-14004:
--------------------------------

TestReplicationSmallTests fails consistently on master. If I switch to commit f7a986cb67b55e36b58bf4b4934a2f32f29f538a, the test passes.

Duo: can you check?
> [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-14004
>                 URL: https://issues.apache.org/jira/browse/HBASE-14004
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver, Replication
>            Reporter: He Liangliang
>            Assignee: Duo Zhang
>            Priority: Critical
>              Labels: replication, wal
>             Fix For: 3.0.0, 2.0.0-alpha-4
>
>         Attachments: HBASE-14004.patch, HBASE-14004-v1.patch, HBASE-14004-v2.patch, HBASE-14004-v2.patch, HBASE-14004-v3.patch
>
>
> It looks like the current write path can cause inconsistency between the memstore/HFile and the WAL, which can leave the slave cluster with more data than the master cluster.
>
> The simplified write path is:
> 1. insert the record into the Memstore
> 2. write the record to the WAL
> 3. sync the WAL
> 4. roll back the Memstore if step 3 fails
>
> It is possible for the HDFS sync RPC call to fail even though the data has already been (perhaps partially) transported to the DataNodes and is eventually persisted. As a result, the handler rolls back the Memstore, and the HFile flushed later will also skip this record.
> ==================================
> This is a long-lived issue. The problem above is solved by the write path reordering: we now sync the WAL before modifying the memstore. But the problem may still exist, because the replication thread may read the new data before we return from hflush. See this document for more details:
> https://docs.google.com/document/d/11AyWtGhItQs6vsLRIx32PwTxmBY3libXwGXI25obVEY/edit#
> So we need to keep a synced length for the WAL and tell the replication WAL reader to use it as the limit when reading the WAL file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
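For concreteness, here is a minimal, self-contained Java sketch of the two write-path orderings and of the "synced length" limit described in the quoted issue above. All names in it (WritePathSketch, SimpleWal, Memstore, Record, getSyncedLength) are hypothetical illustrations, not HBase's actual classes or API.

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class WritePathSketch {

    /** A single edit; stands in for a WAL entry / memstore cell. */
    static class Record {
        final String rowKey;
        final String value;
        Record(String rowKey, String value) {
            this.rowKey = rowKey;
            this.value = value;
        }
    }

    /** Hypothetical in-memory store standing in for the region's memstore. */
    static class Memstore {
        private final List<Record> records = new ArrayList<>();
        void insert(Record r) { records.add(r); }
        void rollback(Record r) { records.remove(r); }
    }

    /** Hypothetical WAL that tracks how far it is known to be durable. */
    static class SimpleWal {
        private final List<Record> appended = new ArrayList<>();
        private long syncedLength; // entries known to be durable

        void append(Record r) {
            appended.add(r);
        }

        /**
         * In this sketch sync() always succeeds. With HDFS, the sync RPC can fail
         * even though the bytes already reached the DataNodes and later persist,
         * which is exactly the inconsistency the issue describes.
         */
        void sync() throws IOException {
            syncedLength = appended.size();
        }

        /** Replication readers must not read past this point. */
        long getSyncedLength() {
            return syncedLength;
        }
    }

    /**
     * Old ordering: memstore first, WAL sync second, rollback on failure.
     * If sync() throws but the edit was already persisted on the DataNodes,
     * the slave cluster can end up with data the master rolled back.
     */
    static void writeOldOrder(Memstore memstore, SimpleWal wal, Record r) {
        memstore.insert(r);       // 1. insert record into Memstore
        wal.append(r);            // 2. write record to WAL
        try {
            wal.sync();           // 3. sync WAL
        } catch (IOException e) {
            memstore.rollback(r); // 4. roll back Memstore if 3 fails
        }
    }

    /**
     * Reordered path: sync the WAL before touching the memstore, and expose
     * the synced length so the replication reader stops at that limit.
     */
    static void writeReordered(Memstore memstore, SimpleWal wal, Record r) throws IOException {
        wal.append(r);
        wal.sync();               // durable (or exception) before the memstore changes
        memstore.insert(r);
    }

    public static void main(String[] args) throws IOException {
        Memstore memstore = new Memstore();
        SimpleWal wal = new SimpleWal();
        writeReordered(memstore, wal, new Record("row1", "v1"));
        // A replication reader should only consume WAL entries up to this length.
        System.out.println("replication read limit = " + wal.getSyncedLength());
    }
}
{code}

The getSyncedLength() method in the sketch corresponds to the last paragraph of the description: even with the reordered path, a replication reader tailing the WAL file directly could see entries beyond what hflush has acknowledged, so it has to stop at the synced length.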