Subject: Re: view_duplicated_id on CouchDB 1.2
From: Robert Newson
To: user@couchdb.apache.org
Date: Wed, 18 Apr 2012 17:40:26 +0100

Stefan,

Dupes are only logged if found when compacting the view, not the database. :(

B.
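Since duplicates are only reported while a *view* compacts, one way to get them logged is to trigger view compaction explicitly over the HTTP API and then watch the CouchDB log. A minimal sketch; the host/port, database name (`mydb`), and design document name (`mydesigndoc`) are placeholder assumptions, not names from this thread:

```shell
# Assumed local CouchDB instance and hypothetical names -- adjust to yours.
COUCH=http://127.0.0.1:5984
DB=mydb
DDOC=mydesigndoc

# View compaction endpoint for a single design document (CouchDB 1.x):
URL="$COUCH/$DB/_compact/$DDOC"
echo "$URL"

# Uncomment to actually trigger compaction, then grep couch.log for
# duplicate-id messages while it runs:
# curl -X POST "$URL" -H 'Content-Type: application/json'
```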
On 18 April 2012 15:34, Stefan Kögl wrote:
> On Wed, Apr 18, 2012 at 11:31 AM, Stefan Kögl wrote:
>> On Wed, Apr 18, 2012 at 10:37 AM, Robert Newson wrote:
>>> The first step is to ensure that your database does not contain
>>> duplicates. To do this, you need to compact it in a special way.
>>> Create a 0 byte dbname.compact file in the same place that the normal
>>> .compact file would be, and then compact. This will force the retry
>>> mode of the compactor, which will more aggressively check for
>>> duplicates.
>>
>> Thanks for the suggestion -- I'll try that! I assume I would run into
>> the same problems again if I re-build the view from a db with
>> duplicates.
>
> The aggressive db compaction is running now (for ~5 hours already),
> and is still at 1% progress, so it seems to do its job very thoroughly ;)
> Does the db compaction log whether it detects and removes any duplicates?
>
> -- Stefan
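The forced "retry mode" compaction described above can be sketched as follows. The data directory and database name are assumptions (on many Linux packages the data directory is /var/lib/couchdb), and on disk the pre-existing compact file for a 1.x database typically sits next to `<dbname>.couch` as `<dbname>.couch.compact` -- check your own layout before trying this:

```shell
# Hypothetical paths for illustration -- substitute your real data dir and db.
DB_DIR=/tmp/couchdb-data
DB=mydb
mkdir -p "$DB_DIR"

# Creating an empty .compact file before compaction starts makes the
# compactor believe it is resuming an interrupted run, which engages the
# more aggressive duplicate-checking retry path.
touch "$DB_DIR/$DB.couch.compact"

# Then trigger database compaction as usual via the HTTP API:
# curl -X POST http://127.0.0.1:5984/$DB/_compact \
#      -H 'Content-Type: application/json'
```

As the thread notes, this retry path is much slower than a normal compaction run, so a multi-hour pass at low progress is not by itself a sign that something is wrong.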