From: Paul Davis
Date: Thu, 22 Dec 2011 13:49:33 -0600
Subject: Re: Is it possible to bring back optional old all-or-nothing behaviour?
To: dev@couchdb.apache.org

On Thu, Dec 22, 2011 at 11:31 AM, Robert Newson wrote:
> In my opinion, and I believe the majority opinion of the group, the
> CouchDB API should be the same everywhere. This specifically includes
> not doing things on a single box that will not work in a
> clustered/sharded situation. It's why our transactions are scoped to a
> single document, for example.
>
> I will also note that all_or_nothing does not provide multi-document
> ACID transactions. The batches used in _bulk_docs are not recorded, so
> those items will be replicated individually (and in parallel, so not
> even in a predictable order), which would break the C and I
> characteristics on the receiving server. The old semantics would abort
> the whole update if any one of the documents couldn't be updated, but
> the new semantics simply introduce a conflict in that case.
>

Slight nitpick, but the new behavior just returns an error saying that
the update would *cause* a conflict; it doesn't actually introduce one.
(Assuming default, non-replicator _bulk_docs calls.)
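To make the nitpick concrete, here's a rough sketch of what a caller
sees, assuming a hypothetical local database named "testdb" (exact revs
will vary):

    # Rough sketch: POST two docs to _bulk_docs where one carries a
    # stale _rev. Assumes CouchDB on 127.0.0.1:5984 and a database
    # named "testdb" -- both hypothetical.
    import json
    import urllib.request

    def bulk_docs(docs):
        req = urllib.request.Request(
            "http://127.0.0.1:5984/testdb/_bulk_docs",
            data=json.dumps({"docs": docs}).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    print(bulk_docs([
        {"_id": "a"},                        # fresh doc, should succeed
        {"_id": "b", "_rev": "1-deadbeef"},  # stale rev, gets rejected
    ]))
    # Expected shape of the per-doc rows (revs will differ):
    # [{"id": "a", "rev": "1-..."},
    #  {"id": "b", "error": "conflict", "reason": "Document update conflict."}]

The point being that "b" comes back as a conflict *error* in the
response; no conflicted revision is written to the database.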
> B.
>
> On 22 December 2011 16:48, Alexander Uvarov wrote:
>> And can become much easier with multi-document transactions as an option.
>>
>> On Thu, Dec 22, 2011 at 10:43 PM, Pepijn de Vos wrote:
>>> But not everyone needs a cluster. I like CouchDB because it's easy,
>>> not because "it scales", and in some situations, all_or_nothing is
>>> easy.

Robert mentions it in passing, but the biggest reason we dropped the
original _bulk_docs behavior doesn't have anything to do with
clustering. It's that the semantics are violated as soon as you try to
replicate. Since there's no tracking of the group of docs posted to
_bulk_docs, as soon as your mobile client tried to move data in or out
you'd lose all three of the A, C, and I in ACID. (There's a sketch at
the end of this mail that illustrates the problem.)

The follow-up question I spent some time on was whether we could *add*
bulk group indicators so the replicator could preserve the batches. As
it turns out, the way our update_seq indices work is fairly at odds
with that kind of grouping: when a document is updated it moves to the
new update_seq position, so any "these docs were written together"
marker falls apart as soon as one member of the group is updated again.
Without some major reengineering of the core of CouchDB (exactly the
part that drives replication) there isn't much we can do here.

Generally speaking, my rule of thumb is that if you find yourself
wanting this feature, you probably want to rethink your application's
architecture. When we went through this discussion back in the 0.9 days
I spent a lot of time trying to think of new designs that would save
it. Then I slowly realized that I just hadn't completely grokked the
replication/distribution model in CouchDB.
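To make the replication point concrete, here's a minimal sketch (again
assuming a fresh, hypothetical local "testdb"). Docs saved in a single
_bulk_docs call show up in _changes -- which is all the replicator ever
consumes -- as independent rows, and updating one of them moves it to a
new seq, so even seq adjacency couldn't serve as a durable group marker:

    # Minimal sketch, assuming a fresh, hypothetical "testdb" on a
    # local CouchDB. Two docs saved in one _bulk_docs batch appear in
    # _changes as unrelated rows; updating one moves it to a new seq.
    import json
    import urllib.request

    BASE = "http://127.0.0.1:5984/testdb"

    def post_json(path, body):
        req = urllib.request.Request(
            BASE + path,
            data=json.dumps(body).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def changes():
        with urllib.request.urlopen(BASE + "/_changes") as resp:
            return [(r["seq"], r["id"]) for r in json.load(resp)["results"]]

    rows = post_json("/_bulk_docs", {"docs": [{"_id": "a"}, {"_id": "b"}]})
    print(changes())  # e.g. [(1, 'a'), (2, 'b')] -- no batch marker anywhere

    rev = next(r["rev"] for r in rows if r["id"] == "a")
    post_json("/_bulk_docs", {"docs": [{"_id": "a", "_rev": rev, "v": 2}]})
    print(changes())  # e.g. [(2, 'b'), (3, 'a')] -- "a" moved, grouping gone

A replicator reading that feed just sees three unrelated updates;
nothing ever marks "a" and "b" as having been written together.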