Date: Mon, 16 Jan 2017 01:30:26 +0000 (UTC)
From: "Allan Yang (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Updated] (HBASE-17471) Region Seqid will be out of order in WAL if using mvccPreAssign

[ https://issues.apache.org/jira/browse/HBASE-17471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allan Yang updated HBASE-17471:
-------------------------------
Description:

mvccPreAssign was introduced by HBASE-16698 and genuinely improved write performance, especially in the ASYNC_WAL scenario. But mvccPreAssign is only used in {{doMiniBatchMutate}}, not in the Increment/Append path. If Increment/Append and batch put run against the same region in parallel, the seqids of that region may not be monotonically increasing in the WAL, because one write path acquires its mvcc/seqid before the append while the other acquires it in the append/sync consumer thread.
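Here is a minimal sketch of how the two paths can interleave (hypothetical class and variable names, not actual HBase code; the queue stands in for the WAL append order):

{code}
// Hypothetical sketch, not HBase code: two writers obtain sequence ids at
// different points in time, so the order entries reach the "WAL" need not
// match seqid order.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class SeqidRaceSketch {
    static final AtomicLong mvcc = new AtomicLong();                      // seqid source
    static final BlockingQueue<Long> wal = new LinkedBlockingQueue<>();   // append order

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Path A (like doMiniBatchMutate with mvccPreAssign): seqid is taken
        // early, the append happens later.
        pool.submit(() -> {
            long seqid = mvcc.incrementAndGet();   // pre-assigned seqid
            sleepQuietly(10);                      // work before the append
            wal.add(seqid);
        });
        // Path B (like Increment/Append): seqid is taken at append time,
        // in the consumer thread.
        pool.submit(() -> {
            sleepQuietly(1);
            wal.add(mvcc.incrementAndGet());       // assigned late, appended first
        });
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.SECONDS);
        System.out.println("WAL append order: " + wal); // typically [2, 1]
    }

    static void sleepQuietly(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) {}
    }
}
{code}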
The out-of-order situation can easily be reproduced by a simple UT, which is attached. I modified the code to assert on the disorder:

{code}
if (this.highestSequenceIds.containsKey(encodedRegionName)) {
  assert highestSequenceIds.get(encodedRegionName) < sequenceid;
}
{code}

To be fair, if we allow disorder in WALs, then this is not an issue. But as far as I know, if {{highestSequenceIds}} is not set properly, some WALs may not be archived to oldWALs correctly. What I haven't figured out yet is whether disorder in the WAL can cause data loss when recovering from a disaster. If so, it is a big problem that needs to be fixed.

I have fixed this problem in our custom 1.1.x branch. My solution is to use mvccPreAssign everywhere and make it non-configurable, since mvccPreAssign is indeed a better approach than assigning the seqid in the ringbuffer thread while handlers wait for it. If anyone thinks this is doable, I will port it to branch-1 and the master branch and upload the patch.
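A rough sketch of that direction (hypothetical, simplified names; not the actual patch): every write path stamps its seqid and enqueues its WAL entry inside the same critical section, so the seqids the sync thread consumes are strictly increasing:

{code}
// Hypothetical sketch of the "pre-assign everywhere" idea, not the actual
// HBase patch: assigning the seqid and enqueuing the WAL entry happen under
// one guard, so append order always matches seqid order.
import java.util.ArrayDeque;
import java.util.Queue;

public class PreAssignEverywhere {
    private long mvcc = 0;                                    // seqid source
    private final Queue<Long> walQueue = new ArrayDeque<>();  // drained by the sync thread
    private final Object seqidGuard = new Object();

    /** Used by batch puts AND Increment/Append alike. */
    public long appendToWal() {
        synchronized (seqidGuard) {
            long seqid = ++mvcc;   // pre-assign the seqid...
            walQueue.add(seqid);   // ...and enqueue in the same critical section,
            return seqid;          // so seqids in the queue are strictly increasing
        }
    }
}
{code}

With both paths funneled through one guard, the per-region {{highestSequenceIds}} bookkeeping only ever sees increasing seqids, so the assertion above would hold and WAL archiving should not be confused.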
> Region Seqid will be out of order in WAL if using mvccPreAssign
> ---------------------------------------------------------------
>
>                 Key: HBASE-17471
>                 URL: https://issues.apache.org/jira/browse/HBASE-17471
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>    Affects Versions: 2.0.0, 1.4.0
>            Reporter: Allan Yang
>            Assignee: Allan Yang
>        Attachments: HBASE-17471.tmp
>

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)