incubator-couchdb-user mailing list archives

From Robert Newson <>
Subject Re: dropping revision records
Date Thu, 06 Sep 2012 20:35:14 GMT

I think Tim was clear that this was post-compaction, though. The 2gb is the historic _rev
values for all these documents.

You could lower _revs_limit and recompact to flush these out.
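
A minimal sketch of that approach over CouchDB's HTTP API, assuming a hypothetical database named "mydb" on localhost:5984 (the endpoints `/_revs_limit` and `/_compact` are standard; the database name and host are placeholders):

```shell
# Lower the per-document revision history limit (default is 1000).
# Old _rev entries beyond this limit become eligible for removal.
curl -X PUT http://localhost:5984/mydb/_revs_limit -d '1'

# Recompact so the excess revision records are actually dropped.
curl -X POST -H 'Content-Type: application/json' \
     http://localhost:5984/mydb/_compact
```

Note that compaction runs in the background; poll the database info document until `compact_running` is false before measuring the new file size. Keeping _revs_limit this low will, as discussed, interfere with replication conflict detection.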


On 6 Sep 2012, at 20:46, Benoit Chesneau wrote:

> On Thu, Sep 6, 2012 at 4:18 PM, Tim Tisdall <> wrote:
>> I had a database of about 10.8gb with almost 15 million records which
>> was fully compacted.  I backed it up by dumping all the JSON and
>> then restored it by inserting it back in.  After it was done and I
>> compacted it, the database was only 8.8gb!  I shed 2gb by
>> dropping the revision stubs still in the database.  This is likely
>> because each record had about 6 revisions (so around 90 million
>> stubs).  All of this is understandable, but 2gb isn't really
>> negligible when running on a virtualized instance of 35gb.  The
>> problem, though, is that the method I used to dump to JSON and load it
>> back into couchdb took almost 12hrs!
>> Is there a way to drop all of the revision stubs and reset the
>> document's revision tags back to "1-" values?  I know this would
>> completely break any kind of replication, but in this instance I am
>> not doing any.
> It would break more than the replication. Compacting your database is
> the solution.
> - benoît
