Subject: Re: Review Request: HBASE-2915 Deadlock between HRegion.ICV and HRegion.close
From: stack@duboce.net
To: "Jean-Daniel Cryans", jiraposter@review.hbase.org, dev@hbase.apache.org, stack@duboce.net
Date: Fri, 20 Aug 2010 22:58:23 -0000

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/691/#review975
-----------------------------------------------------------

Oh, one other thing: in discussions we talked of no longer needing to wait on row locks to expire... I don't see this being excised from the close method. Should that be in here?

- stack


On 2010-08-19 14:59:41, Jean-Daniel Cryans wrote:
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://review.cloudera.org/r/691/
> -----------------------------------------------------------
>
> (Updated 2010-08-19 14:59:41)
>
>
> Review request for hbase.
>
>
> Summary
> -------
>
> This patch removes newScannerLock and renames the splitAndClose lock to just "lock". Every operation is now required to obtain the read lock on "lock" before doing anything (including getting a row lock). This is done by calling openRegionTransaction inside a try statement and by calling closeRegionTransaction in finally.
>
> flushcache got refactored some more in order to do the locking in the proper order: first get the read lock, then do the writestate handling.
>
> Finally, it removes the need to hold a writeLock when flushing, when subclassers give atomic work to do via internalPreFlushcacheCommit. This means that this patch breaks external contribs. This is required to keep our whole locking mechanism simpler.
>
>
> This addresses bug HBASE-2915.
>     http://issues.apache.org/jira/browse/HBASE-2915
>
>
> Diffs
> -----
>
>   /trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java 987300
>   /trunk/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java 987300
>   /trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransaction.java 987300
>
> Diff: http://review.cloudera.org/r/691/diff
>
>
> Testing
> -------
>
> 5 concurrent ICV threads + randomWrite 3 + scans on a single RS. I'm also in the process of deploying it on a cluster.
>
>
> Thanks,
>
> Jean-Daniel
>
>
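
A minimal, hypothetical Java sketch of the locking pattern described in the review summary above: every operation wraps its work in openRegionTransaction/closeRegionTransaction (a shared read lock) inside try/finally, while close() takes the write lock so it waits for in-flight operations rather than racing them. Only the openRegionTransaction/closeRegionTransaction names come from the review; the class, fields, and method bodies below are illustrative assumptions, not the actual HRegion code from the patch.

import java.util.concurrent.locks.ReentrantReadWriteLock;

class RegionSketch {
  // Single lock replacing newScannerLock and the old splitAndClose lock
  // (an assumption matching the "renames splitAndClose lock to just lock" note).
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private volatile boolean closing = false;

  // Every operation takes the read lock before doing anything else,
  // including acquiring row locks.
  void openRegionTransaction() {
    lock.readLock().lock();
  }

  void closeRegionTransaction() {
    lock.readLock().unlock();
  }

  // Example caller: ICV, flush, get, put, scan setup, etc. would all
  // follow the same try/finally shape.
  long incrementColumnValue(byte[] row, long amount) {
    openRegionTransaction();
    try {
      if (closing) {
        throw new IllegalStateException("Region is closing");
      }
      // ... obtain the row lock and apply the increment here ...
      return amount;
    } finally {
      closeRegionTransaction();
    }
  }

  // close() takes the write lock, so it blocks until all read-lock holders
  // (in-flight operations) have finished, avoiding the ICV/close deadlock
  // without waiting on row locks to expire.
  void close() {
    closing = true;
    lock.writeLock().lock();
    try {
      // ... flush and close stores here ...
    } finally {
      lock.writeLock().unlock();
    }
  }
}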