Subject: Re: [user] Re: reserving resources in Couch
From: Paul Davis <paul.joseph.davis@gmail.com>
To: user@couchdb.apache.org
Date: Fri, 13 Feb 2009 06:47:36 -0500

On Fri, Feb 13, 2009 at 5:59 AM, Wout Mertens wrote:
> Oh wow, I completely missed that functionality :-)
>
> Actually, I think you mean that I should rewrite the resource documents,
> since they are being locked. Let's look at the sequence:
>
> CouchDB: C
> App instances: A and B
> Resource: R_0 (rev 0, unused), R_1_A (rev 1, resource reserved by A),
> R_1_B (rev 1, resource reserved by B)
>
> Time 0:
>   A reads R_0 from C
>   B reads R_0 from C
>
> Time 1:
>   A writes R_1_A to C
>   B writes R_1_B to C
>
> Time 2:
>   A gets failure from C => A knows it didn't reserve R
>   B gets success from C => B has the resource reserved
>
> Is that correct? Man, that's easy :)
>
> Can I count on this always being true for a single-node CouchDB?
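
[Editor's note: a rough sketch of the sequence above against CouchDB's HTTP
API, assuming Python's requests library and an invented "resources" database
holding one document, r1. It is an illustration, not code from the thread.]

    import requests

    BASE = "http://localhost:5984/resources"

    # Time 0: A and B both read the same revision of the resource doc.
    # The JSON returned by GET carries the current _rev.
    a_doc = requests.get(BASE + "/r1").json()
    b_doc = requests.get(BASE + "/r1").json()

    # Time 1: each marks the resource as reserved, still holding the
    # _rev it read at Time 0.
    a_doc["reserved_by"] = "A"
    b_doc["reserved_by"] = "B"

    # Time 2: CouchDB accepts whichever PUT it processes first
    # (201 Created) and rejects the other with 409 Conflict, so
    # exactly one writer wins.
    print(requests.put(BASE + "/r1", json=b_doc).status_code)  # 201: B reserved R
    print(requests.put(BASE + "/r1", json=a_doc).status_code)  # 409: A knows it failed
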
> What about a replicating CouchDB cloud where competing instances (A and B)
> connect to the same CouchDB?
>
> And, just out of interest, what would be a good way to do this if you have
> competing instances connecting to different CouchDBs in a replicating
> cloud? I think you'd have to make replication a part of the reservation
> process, right?
>
> Wout.

I think the traditional method would be to nominate a write node for each
document, a la consistent hashing or some other scheme. Then no matter where
you read a document from, you're guaranteed to get conflict resolution at
write time.

Also, remember that people don't expect the perfection that software
engineers do. If once a year I go to book a conference room, and a few
minutes later I get an email that says, "oops, scheduling conflict, can you
reschedule?", I wouldn't care much. As long as it can be fixed quickly and
easily, no one minds much, assuming it's a low-frequency event. Granted,
that whole argument is invalid for things like the email thread. :)

HTH,
Paul Davis

> On Feb 12, 2009, at 8:56 PM, Troy Kruthoff wrote:
>
>> If I understand you correctly, what you need is already baked in with
>> revision numbers:
>>
>> 1) Get a doc that is not assigned a resource.
>> 2) Flag the doc as being in use and then save it.
>> 2a) If the save fails because of a conflict, you can verify that the new
>>     rev is in use and forget about it.
>> 2b) If the save succeeds, you know that your process has secured the
>>     "in-use" lock.
>>
>> -- troy
>>
>> On Feb 12, 2009, at 9:11 AM, Wout Mertens wrote:
>>
>>> Ok,
>>>
>>> (no actual code yet, I don't have time to code right now :( )
>>>
>>> I have a project currently using an RDBMS and I'd like to port it to
>>> CouchDB. One of the things I do is lock a table, choose a free resource
>>> from a query on a static table and the session list, assign the
>>> resource to a new session, and unlock the table.
>>>
>>> How would I be able to do the same thing with CouchDB, given that two
>>> sessions could start at the same time? I do have the advantage that
>>> simultaneous starters would contact the same CouchDB instance.
>>>
>>> I was thinking of using sums: make a view that calculates a sum per
>>> resource. A resource record would count as +1 and an in-use record
>>> would count as -1.
>>>
>>> Then, when you reserve a resource, you save the in-use record. After
>>> saving, look up the sum for the resource you reserved. If it's not
>>> equal to 0, use a stable algorithm to determine who has to release
>>> the resource again.
>>>
>>> Would this close the race condition? Note that no documents are
>>> overwritten at reservation time; each reservation doubles as the
>>> event log. When the session is cleaned up, the document that
>>> represents it is updated to release the resource.
>>>
>>> Does this work? Is there a better way to do it?
>>>
>>> Thanks,
>>>
>>> Wout.
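
[Editor's note: for reference, Wout's sum view might be sketched as below,
again in Python with the requests library. The database name, design
document name, the "type" and "resource_id" fields, and the example
resource id "room-101" are all invented, and the view URL follows the
_design/.../_view/... layout of later CouchDB releases.]

    import requests

    BASE = "http://localhost:5984/resources"

    design = {
        "views": {
            "availability": {
                # A resource doc counts +1 under its own id; an in-use doc
                # counts -1 under the id of the resource it reserves.
                "map": (
                    "function(doc) {"
                    "  if (doc.type == 'resource') emit(doc._id, 1);"
                    "  if (doc.type == 'in_use')   emit(doc.resource_id, -1);"
                    "}"
                ),
                # sum() is a helper provided by CouchDB's JavaScript views.
                "reduce": "function(keys, values) { return sum(values); }",
            }
        }
    }
    requests.put(BASE + "/_design/reservations", json=design)

    # After saving an in-use doc, check the per-resource sum: 0 means
    # exactly one reservation holds it; below 0 means contention, and the
    # stable tie-break rule decides who releases.
    r = requests.get(
        BASE + "/_design/reservations/_view/availability",
        params={"key": '"room-101"', "group": "true"},
    )
    print(r.json())  # e.g. {"rows": [{"key": "room-101", "value": 0}]}
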