Subject: Re: Re: Re: Re: Newbie question: compaction and mvcc consistency?
From: Robert Newson
To: user@couchdb.apache.org
Date: Wed, 26 May 2010 23:03:56 +0100

I'm sorry if I wasn't clear. I was listing all the reasons why the
patch has not been applied.

B.

On Wed, May 26, 2010 at 10:46 PM, Markus Jelsma wrote:
> It's a good question. I wrote the patch because I saw the problem and
> it concerned me. Since it's difficult to induce the problem, and the
> patch is not subtle in its actions, it has not been committed to the
> project (this was, I think, my first couchdb patch).
>
> It remains theoretically possible but, given the difficulty of
> inducing it, it's not being addressed yet.
>
> But is it addressed in .10? If so, how?
>
> Storing writes in RAM would violate the durability semantics of
> couchdb and would mean you would have to be more careful during
> compaction.
>
> Of course, a loss of power would not flush a RAM buffer to disk.
>
> Clients shouldn't need to know or care about compaction, which
> is just a system maintenance task.
>
> Obviously it must be transparent to clients, it would spoil the fun =)
>
> B.
>
> On Wed, May 26, 2010 at 10:19 PM, Markus Jelsma wrote:
>> How is it that you couldn't reproduce the scenario with .10 and
>> onwards? The patch you supplied for that JIRA ticket you mention in
>> the other post doesn't seem to be incorporated in .10 at all. Are
>> there other useful countermeasures in .10?
>>
>> Also, on the subject of your ticket and especially Adam's comment to
>> it, would storing incoming writes in a RAM buffer during the wait
>> help to allow for writes during a compaction that can't cope with
>> the amount of writes?
>>
>> -----Original message-----
>> From: Robert Newson
>> Sent: Wed 26-05-2010 22:56
>> To: user@couchdb.apache.org
>> Subject: Re: Re: Newbie question: compaction and mvcc consistency?
>>
>> I succeeded in preventing compaction from completing back in the 0.9
>> days, but I've been unable to reproduce it since 0.10 onwards.
>> Compaction retries until it succeeds (or you hit the end of the
>> disk). I've not managed to make it retry more than five times before
>> it succeeds.
>>
>> B.
>>
>> On Wed, May 26, 2010 at 9:52 PM, Markus Jelsma wrote:
>>> On the subject of a compaction that cannot deal with the magnitude
>>> of writes, can that theory be put to the test (or has it been
>>> already)? Does anyone know of a specific setup, i.e. machine
>>> specifications relative to the number of writes per second, where
>>> this happens?
>>>
>>> This is a theoretical obstacle that could use some factual numbers
>>> to help everyone avoid it in their specific setup. I'd prefer not
>>> to end up in such a situation in practice, especially if compaction
>>> is triggered by some process that monitors available disk space or
>>> whatever other condition.
>>>
>>> -----Original message-----
>>> From: Randall Leeds
>>> Sent: Wed 26-05-2010 22:36
>>> To: user@couchdb.apache.org
>>> Subject: Re: Newbie question: compaction and mvcc consistency?
>>>
>>> On Wed, May 26, 2010 at 13:29, Robert Buck wrote:
>>>> On Wed, May 26, 2010 at 3:00 PM, Randall Leeds wrote:
>>>>> The switch to the new, compacted database won't happen so long as
>>>>> there are references to the old one. (r) will not disappear until
>>>>> (i) is done with it.
>>>>
>>>> Curious, you said "switch to the new [database]". Does this imply
>>>> that compaction works by creating a new database file adjacent to
>>>> the old one?
>>>
>>> Yes.
>>>
>>>>
>>>> If this is what you are suggesting, I have another question... I
>>>> also read that the compaction process may never catch up with the
>>>> writes if they never let up. So along this specific train of
>>>> thought, does Couch perform compaction by walking through the
>>>> database in a forward-only manner?
>>>
>>> If I understand correctly the answer is 'yes'. Meanwhile, new
>>> writes still hit the old database file as the compactor walks the
>>> old tree. If there are new changes when the compactor finishes, it
>>> will walk the new changes starting from the root. Typically this
>>> process gets faster and faster on busy databases until it catches
>>> up completely and the switch can be made.
>>>
>>> That said, you can construct an environment where compaction will
>>> never finish, but I haven't seen reports of it happening in the
>>> wild.
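
To make the catch-up behaviour Randall describes concrete, here is a toy,
in-memory sketch. This is an illustration only, not CouchDB's actual Erlang
implementation: each pass copies everything written before the pass started,
then repeats for whatever arrived in the meantime, until no writes are
outstanding and the switch can be made.

    # Toy model of the compaction catch-up loop -- illustration only,
    # not CouchDB's actual Erlang implementation.

    class ToyDb:
        """An append-only list of (seq, doc) pairs, loosely like a .couch file."""
        def __init__(self):
            self.log = []

        @property
        def update_seq(self):
            return len(self.log)

        def write(self, doc):
            self.log.append((self.update_seq + 1, doc))

    def compact(old, new):
        copied_seq = 0
        while True:
            target_seq = old.update_seq             # snapshot of "now"
            for seq, doc in old.log[copied_seq:target_seq]:
                new.write(doc)                      # writers may still append to `old` meanwhile
            copied_seq = target_seq
            if old.update_seq == copied_seq:        # caught up: nothing arrived during this pass
                return new                          # safe to switch to the compacted file
            # otherwise loop again; each pass only covers what arrived during the last one

    old = ToyDb()
    for i in range(1000):
        old.write({"_id": str(i)})
    print(compact(old, ToyDb()).update_seq)         # 1000

The real compactor of course keeps only the latest revision of each document;
the point of the sketch is purely the repeated catch-up passes, which shrink
on each iteration unless writes outpace the copy.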
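
On Markus's point about a monitoring process triggering compaction: a minimal
sketch of doing that over the HTTP API is below. The host, port and database
name ("mydb") are assumptions, and depending on the CouchDB version and
configuration, admin credentials may be required for _compact.

    # Minimal sketch: trigger compaction over CouchDB's HTTP API and wait for
    # it to finish. The base URL and database name are assumptions.
    import json
    import time
    import urllib.request

    BASE = "http://localhost:5984"
    DB = "mydb"

    def db_info():
        with urllib.request.urlopen(f"{BASE}/{DB}") as resp:
            return json.load(resp)

    before = db_info()["disk_size"]

    # Kick off compaction; writes to the database are not blocked while it runs.
    req = urllib.request.Request(
        f"{BASE}/{DB}/_compact",
        data=b"",
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

    # Poll the database info document until compact_running flips back to false.
    while db_info().get("compact_running"):
        time.sleep(5)

    print("disk_size: %d -> %d" % (before, db_info()["disk_size"]))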