From: Dominic Williams
To: Patrick Hunt
Cc: zookeeper-user@hadoop.apache.org
Date: Wed, 12 May 2010 19:17:17 +0100
Subject: Re: New ZooKeeper client library "Cages"

Hi Patrick,

Internally, ZkMultiLock constructs single path ZkReadLock and ZkWriteLock objects to handle the lock paths you add to it. These work in a similar way to that described in the ZooKeeper recipes. If you only add a single lock path to ZkMultiLock, then when you call acquire() it behaves exactly like a ZkReadLock or ZkWriteLock.

However, if you add multiple paths, it proceeds differently. In this case it constructs an array containing an appropriate single path lock object for each path, and calls tryAcquire() on each. If any of these locks fails to acquire because it is already held, ZkMultiLock calls release() on all locks in the array (one aspect of the work on ZkReadLock and ZkWriteLock was enabling them to accept calls to release() before they reach an acquired state). Having released all the locks, ZkMultiLock then waits for a delay determined by a binary exponential backoff style algorithm, constructs a new array of equivalent single path lock objects and calls tryAcquire() on each again. This continues until all the single path locks are acquired in a single pass over the array.
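Roughly, the retry loop looks like the sketch below. This is simplified pseudo-Java rather than the actual Cages source; the TryLock interface, the lockFactory parameter and the backoff constants/jitter are just illustrative stand-ins:

    import java.util.List;
    import java.util.Random;
    import java.util.function.Supplier;

    // Sketch of the acquire-all-or-release-all pattern described above.
    // NOT the Cages implementation; the TryLock interface and the backoff
    // constants are hypothetical stand-ins used purely for illustration.
    interface TryLock {
        boolean tryAcquire() throws Exception; // false if already held elsewhere
        void release();                        // safe to call before the lock is acquired
    }

    final class MultiLockSketch {
        private static final long BASE_DELAY_MS = 50;   // assumed initial backoff
        private static final long MAX_DELAY_MS  = 5000; // assumed backoff ceiling
        private static final Random RANDOM = new Random();

        /** Retry until every single path lock is acquired in one pass over the array. */
        static void acquireAll(Supplier<List<TryLock>> lockFactory) throws Exception {
            int attempt = 0;
            while (true) {
                // Build a fresh set of equivalent single path lock objects for this pass.
                List<TryLock> locks = lockFactory.get();
                boolean allAcquired = true;
                for (TryLock lock : locks) {
                    if (!lock.tryAcquire()) {
                        allAcquired = false;
                        break;
                    }
                }
                if (allAcquired) {
                    return; // all paths locked together in one pass
                }
                // Some lock was already held: release every lock in the array
                // (including ones never acquired), then back off before retrying.
                for (TryLock lock : locks) {
                    lock.release();
                }
                long cap = Math.min(MAX_DELAY_MS, BASE_DELAY_MS << Math.min(attempt, 10));
                Thread.sleep(1 + RANDOM.nextInt((int) cap)); // randomised delay up to the cap
                attempt++;
            }
        }
    }

The important design point is the one mentioned above: release() can safely be called on a lock that never reached the acquired state, which is what makes the all-or-nothing pass over the array possible.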
The advantage of this approach is that if you have an operation that requires some number of locks, and all these locks are acquired together using ZkMultiLock, you cannot get into a deadlock situation. Where lock paths are heavily contended this can be less efficient than using nested single path locks, but in practice most lock paths aren't that contended, and you just need to guard against the occasional contention that would otherwise mess your data up. For that reason I am certainly asking everyone to stick to ZkMultiLock in our work - there's nothing worse than distributed deadlock!

Best,
Dominic

On 12 May 2010 00:51, Patrick Hunt wrote:

> Hi Dominic, this looks really interesting, thanks for open sourcing it. I
> really like the idea of providing higher level concepts. I only just looked
> at the code, so it wasn't clear on first pass what happens if you multilock
> on 3 paths and the first 2 succeed but the third fails. How are the locks
> cleared? And what happens when the client loses connectivity to the cluster,
> both if partial locks are acquired and if all the locks were acquired (for
> example, how does the caller know whether the locks are still held or have
> been released because the client was partitioned from the cluster)?
>
> I'll try downloading the code and looking at it more; I see some javadoc in
> there as well, so that's great.
>
> Regards,
>
> Patrick
>
>
> On 05/11/2010 04:02 PM, Dominic Williams wrote:
>
>> Anyone looking for a Java client library for ZooKeeper, please check out:
>>
>> Cages - http://cages.googlecode.com
>>
>> The library will be expanded, and feedback will be helpful.
>>
>> Many thanks,
>> Dominic
>> ria101.wordpress.com