Subject: Re: Concurrent reads and writes on BookKeeper
From: André Oriani
To: zookeeper-user@hadoop.apache.org
Date: Thu, 20 May 2010 20:29:26 -0300 (BRT)

Well Flavio, it is an extremely simple prototype in which a primary
broadcasts updates on a single integer to the backups. So we are going to
have (n-1) reads for every write in a cluster of size n. I think
sequential nodes in ZooKeeper are fine for now, but I may revisit that
decision if things start to get more complex (rough sketch below).

Thanks a lot,
André Oriani

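Just to make the idea concrete, here is roughly what I have in mind with
sequential nodes. This is only a sketch: the /updates path, the "update-"
prefix, and the class name are made up, and I have left out watches and
error handling.

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Sketch only: the primary appends each update as a sequential child of /updates,
// and each backup lists the children and replays them in order.
public class SequentialBroadcast {
    private static final String ROOT = "/updates";  // made-up parent znode, created beforehand
    private final ZooKeeper zk;

    public SequentialBroadcast(ZooKeeper zk) {
        this.zk = zk;
    }

    // Primary side: publish one integer update. ZooKeeper appends a zero-padded,
    // monotonically increasing counter to the znode name.
    public String publish(int value) throws KeeperException, InterruptedException {
        byte[] data = ByteBuffer.allocate(4).putInt(value).array();
        return zk.create(ROOT + "/update-", data,
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
    }

    // Backup side: read every published update in creation order. The zero-padded
    // counter makes lexicographic order match creation order.
    public void replay() throws KeeperException, InterruptedException {
        List<String> children = new ArrayList<String>(zk.getChildren(ROOT, false));
        Collections.sort(children);
        for (String child : children) {
            byte[] data = zk.getData(ROOT + "/" + child, false, null);
            System.out.println(child + " -> " + ByteBuffer.wrap(data).getInt());
        }
    }
}
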
> Hi Andre,
>
> To guarantee that two clients that read from a ledger will read the
> same sequence of entries, we need to make sure that there is agreement
> on the end of the sequence. A client is still able to read from an
> open ledger, though. We have an open jira about informing clients of
> the progress of an open ledger (ZOOKEEPER-462), but we haven't reached
> agreement on it yet. Some folks think that it is best that each
> application use the mechanism it finds best. One option is to have the
> writer write periodically to a ZooKeeper znode to report its progress.
>
> I would need to know more detail about your application before
> recommending that you stick with BookKeeper or switch to ZooKeeper. If
> your workload is dominated by writes, then BookKeeper might be a
> better option.
>
> -Flavio
>
> On May 19, 2010, at 1:29 AM, André Oriani wrote:
>
>> Sorry, I forgot the subject on my last message :|
>>
>> Hi all,
>>
>> I was considering BookKeeper to implement a replicated server
>> application with one primary server as the writer and many backup
>> servers reading from BookKeeper concurrently. The latest documentation
>> I had access to says "This writer has to execute a close ledger
>> operation before any other client can read from it." So readers cannot
>> read any entry on the ledger, not even the already committed ones,
>> until the writer stops writing to the ledger, i.e., closes it. Is my
>> understanding right? Should I then use ZooKeeper directly to achieve
>> what I want?
>>
>> Thanks for the attention,
>> André Oriani

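P.S.: Regarding the option of having the writer periodically record its
progress in a ZooKeeper znode while the ledger is still open, this is how
I read the suggestion. Again only a sketch: the /progress znode, host
strings, and password are made up, and the exact BookKeeper calls may
differ between versions.

import java.nio.ByteBuffer;

import org.apache.bookkeeper.client.BookKeeper;
import org.apache.bookkeeper.client.LedgerHandle;
import org.apache.zookeeper.ZooKeeper;

// Sketch only: the writer appends entries to a BookKeeper ledger and every few
// entries records (ledger id, last added entry id) in a ZooKeeper znode, so that
// backups tailing the still-open ledger know how far it is safe to read.
public class ProgressPublishingWriter {
    private static final String PROGRESS_ZNODE = "/progress";  // made-up znode, created beforehand
    private static final byte[] PASSWD = "secret".getBytes();  // ledger digest password

    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, null);
        BookKeeper bk = new BookKeeper("localhost:2181");
        LedgerHandle ledger = bk.createLedger(BookKeeper.DigestType.MAC, PASSWD);

        for (int i = 0; i < 100; i++) {
            long entryId = ledger.addEntry(ByteBuffer.allocate(4).putInt(i).array());
            if (entryId % 10 == 0) {
                // Publish the progress; version -1 means "ignore the znode version".
                byte[] progress = ByteBuffer.allocate(16)
                        .putLong(ledger.getId()).putLong(entryId).array();
                zk.setData(PROGRESS_ZNODE, progress, -1);
            }
        }

        // Once the ledger is closed, any reader can open it and read all entries.
        ledger.close();
        bk.close();
        zk.close();
    }
}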