Message-ID: <58E2686F.5070901@gmail.com>
Date: Mon, 03 Apr 2017 11:21:19 -0400
From: Josh Elser
To: dev@hbase.apache.org
Subject: Re: How threads interact with each other in HBase

Yes, you are correct that there is an edge condition here when a node
suffers an abrupt power failure. HDFS guards against most of this, as
multiple copies of your data are spread across racks. However, if you have
an abrupt power failure across multiple racks (or across all of your
hardware), then yes, you would likely lose some data. Having some form of
redundant power supply is a common deployment choice that further mitigates
this risk.

If this is not documented clearly enough, patches to improve it are
welcome :)

IMO, all of this is an implementation detail, though, as I believe you
already understand. It does not change the fact that, architecturally and
academically, HBase is a consistent system.
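For readers following along: the distinction at the heart of this thread is
between two calls on HDFS's FSDataOutputStream. Below is a minimal sketch
against the stock Hadoop FileSystem API; the path and configuration are
invented for illustration, not taken from the thread.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FlushVsSync {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Hypothetical demo path, standing in for a WAL file.
        try (FSDataOutputStream out = fs.create(new Path("/tmp/wal-demo"))) {
          out.write("edit-1".getBytes("UTF-8"));
          // hflush(): pushes the bytes to the DataNodes' memory. New readers
          // can see the data, but nothing has been forced to disk yet, so a
          // simultaneous power failure on all replicas can still lose it.
          out.hflush();

          out.write("edit-2".getBytes("UTF-8"));
          // hsync(): additionally asks each DataNode to fsync the block file
          // to its local disk, so the edit survives whole-cluster power loss
          // (at a noticeable latency cost).
          out.hsync();
        }
      }
    }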
杨苏立 Yang Su Li wrote:
> I understand why HBase by default does not use hsync -- it does come with
> a big performance cost (though for FSYNC_WAL, which is not the default
> option, you should probably do it, because the documentation explicitly
> promises it).
>
> I just want to make sure my description of HBase is accurate, including
> the durability aspect.
>
> On Sun, Apr 2, 2017 at 12:19 PM, Ted Yu wrote:
>> Suli:
>> Have you looked at HBASE-5954? It gives some background on why the HBase
>> code is formulated the way it currently is.
>>
>> Cheers
>>
>> On Sun, Apr 2, 2017 at 9:36 AM, 杨苏立 Yang Su Li wrote:
>>> Doesn't your second paragraph just prove my point? If data is not
>>> persisted to disk, then it is not durable. That is the definition of
>>> durability.
>>>
>>> If you want the data to be durable, then you need to call hsync()
>>> instead of hflush(), and that would be the correct behavior if you use
>>> the FSYNC_WAL flag (per the HBase documentation).
>>>
>>> However, HBase does not do that.
>>>
>>> Suli
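For context, the flags being debated here are per-mutation settings in the
HBase client API. A hedged sketch, assuming the 1.x-era client; the table,
family, qualifier, and value names are invented:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Durability;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DurabilityDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("t"))) {
          Put put = new Put(Bytes.toBytes("row1"));
          put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"),
              Bytes.toBytes("v"));

          // SYNC_WAL (the usual default): the WAL edit is hflush()ed to the
          // DataNodes' memory before the RPC returns.
          put.setDurability(Durability.SYNC_WAL);

          // FSYNC_WAL: documented to hsync() the edit to disk. The point of
          // this thread is that, at the time of the discussion, it did not
          // actually hsync.
          // put.setDurability(Durability.FSYNC_WAL);

          table.put(put);
        }
      }
    }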
>>> On Sun, Apr 2, 2017 at 11:26 AM, Josh Elser wrote:
>>>> No, that's not correct. HBase would, by definition, not be a consistent
>>>> database if a write were not durable when a client sees a successful
>>>> write.
>>>>
>>>> The point that I will concede to you is that the hflush call may, in
>>>> extenuating circumstances, not be completely durable. For example,
>>>> hflush does not actually force the data to disk. If an abrupt power
>>>> failure happens before the data is pushed to disk, HBase may think the
>>>> data was durable when it actually wasn't (at the HDFS level).
>>>>
>>>> On Thu, Mar 30, 2017 at 4:26 PM, 杨苏立 Yang Su Li wrote:
>>>>> Also, please correct me if I am wrong, but I don't think a put is
>>>>> durable when the RPC returns to the client. Its corresponding WAL
>>>>> entry has only been pushed to the memory of all three data nodes, so
>>>>> it has a low probability of being lost. But nothing is persisted at
>>>>> this point.
>>>>>
>>>>> And this is true no matter whether you use the SYNC_WAL or FSYNC_WAL
>>>>> flag.
>>>>>
>>>>> On Tue, Mar 28, 2017 at 12:11 PM, Josh Elser wrote:
>>>>>> 1.1 -> 2: don't forget about the block cache, which can remove the
>>>>>> need for any HDFS read.
>>>>>>
>>>>>> I think you're over-simplifying the write path quite a bit. I'm not
>>>>>> sure what you mean by an 'asynchronous write', but that doesn't
>>>>>> exist at the HBase RPC layer, as it would invalidate the consistency
>>>>>> guarantees (if an RPC returns to the client that data was "put",
>>>>>> then it is durable).
>>>>>>
>>>>>> Going off of memory (sorry in advance if I misstate something): the
>>>>>> general way that data is written to the WAL is a "group commit". You
>>>>>> have many threads all trying to append data to the WAL --
>>>>>> performance would be terrible if you serially applied all of these
>>>>>> writes. Instead, many writes can be accepted and the caller receives
>>>>>> a Future. The caller must wait for the Future to complete. What's
>>>>>> happening behind the scenes is that the writes are being bundled
>>>>>> together to reduce the number of syncs to the WAL ("grouping" the
>>>>>> writes together). When one caller's Future completes, what really
>>>>>> happened is that the write/sync which included that caller's update
>>>>>> was committed (along with others). All of this happens inside the
>>>>>> RegionServer's implementation of accepting an update.
>>>>>>
>>>>>> https://github.com/apache/hbase/blob/55d6dcaf877cc5223e679736eb613173229c18be/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java#L74-L106
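A toy illustration of the group-commit pattern described above, far simpler
than what FSHLog actually does (no ring buffer, no multiple syncer threads,
no error handling); all names here are invented:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.LinkedBlockingQueue;

    public class GroupCommitSketch {
      static final class Entry {
        final byte[] edit;
        final CompletableFuture<Void> done = new CompletableFuture<>();
        Entry(byte[] edit) { this.edit = edit; }
      }

      private final BlockingQueue<Entry> queue = new LinkedBlockingQueue<>();

      // Called by many handler threads; each blocks on the returned Future.
      public CompletableFuture<Void> append(byte[] edit) {
        Entry e = new Entry(edit);
        queue.add(e);
        return e.done;
      }

      // The single sync loop. One (simulated) sync covers the whole batch --
      // this is the "grouping" that amortizes the sync cost over many callers.
      public void syncLoop() throws InterruptedException {
        List<Entry> batch = new ArrayList<>();
        while (true) {
          batch.add(queue.take());   // block until at least one edit arrives
          queue.drainTo(batch);      // grab everything else already pending
          for (Entry e : batch) {
            // A real WAL would append e.edit here, then hflush/hsync once
            // for the whole batch.
          }
          for (Entry e : batch) e.done.complete(null);
          batch.clear();
        }
      }
    }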
>>>>>> 杨苏立 Yang Su Li wrote:
>>>>>>> The attachment can be found at the following URL:
>>>>>>> http://pages.cs.wisc.edu/~suli/hbase.pdf
>>>>>>>
>>>>>>> Sorry for the inconvenience...
>>>>>>>
>>>>>>> On Mon, Mar 27, 2017 at 8:25 PM, Ted Yu wrote:
>>>>>>>> Again, the attachment didn't come through. Is it possible to share
>>>>>>>> it as a Google doc?
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>> On Mon, Mar 27, 2017 at 6:19 PM, 杨苏立 Yang Su Li wrote:
>>>>>>>>> Hi,
>>>>>>>>> I am a graduate student working on scheduling in storage systems,
>>>>>>>>> and we are interested in how the different threads in HBase
>>>>>>>>> interact with each other and how that might affect scheduling.
>>>>>>>>>
>>>>>>>>> I have written down my understanding of how HBase/HDFS works based
>>>>>>>>> on its current thread architecture (attached). I am wondering if
>>>>>>>>> the developers of HBase could take a look at it and let me know if
>>>>>>>>> anything is incorrect or inaccurate, or if I have missed anything.
>>>>>>>>>
>>>>>>>>> Thanks a lot for your help!
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Suli Yang
>>>>>>>>>
>>>>>>>>> Department of Physics
>>>>>>>>> University of Wisconsin Madison
>>>>>>>>>
>>>>>>>>> 4257 Chamberlin Hall
>>>>>>>>> Madison WI 53703