couchdb-user mailing list archives

From Paul Davis <>
Subject Re: dropping revision records
Date Thu, 06 Sep 2012 23:41:39 GMT
It's a PUT to /dbname/_revs_limit IIRC. Not sure if there's a wiki page or not.
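A minimal sketch of that call with curl (the hostname, port, and dbname are placeholders; the default limit is 1000):

```shell
# Read the current per-database revision limit (defaults to 1000).
curl http://localhost:5984/dbname/_revs_limit

# Lower it, then trigger compaction so the old revision records get pruned.
curl -X PUT http://localhost:5984/dbname/_revs_limit -d '10'
curl -X POST http://localhost:5984/dbname/_compact \
     -H 'Content-Type: application/json'
```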

Also, it limits the number of _revisions per leaf in a revision tree.
There's a lot of subtlety in there so you'll definitely want to fully
understand revision trees and conflict resolution before you start
going too small on that.
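To illustrate the per-leaf effect in plain Python (a conceptual sketch, not CouchDB's actual implementation): each leaf keeps a `_revisions` stub of the form `{"start": N, "ids": [newest, ..., oldest]}`, and a lower limit bounds how many of those ids survive compaction.

```python
def trim_revisions(stub, revs_limit):
    """Keep only the newest `revs_limit` revision ids for one leaf,
    roughly as compaction does after _revs_limit is lowered."""
    return {"start": stub["start"], "ids": stub["ids"][:revs_limit]}

# A document on revision 6 with six ancestor ids recorded:
doc_stub = {"start": 6, "ids": ["f6", "e5", "d4", "c3", "b2", "a1"]}

# With a limit of 2, only the two newest ids remain:
trim_revisions(doc_stub, 2)
```

Note this is per leaf: a document with several conflict branches keeps up to `revs_limit` ids on each branch, which is part of the subtlety above.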

On Thu, Sep 6, 2012 at 4:30 PM, Tim Tisdall <> wrote:
> Are there docs in the wiki on how _revs_limit works?  I tried looking
> for documentation with Google, but couldn't find any.  Where exactly
> is that set, and does that affect the total number of revisions across
> the database or the total per document?
> On Thu, Sep 6, 2012 at 4:35 PM, Robert Newson <> wrote:
>> I think Tim was clear that this was post-compaction, though. The 2gb is the historic _rev values for all these documents.
>> You could lower _revs_limit and recompact to flush these out.
>> B.
>> On 6 Sep 2012, at 20:46, Benoit Chesneau wrote:
>>> On Thu, Sep 6, 2012 at 4:18 PM, Tim Tisdall <> wrote:
>>>> I had a database of about 10.8gb with almost 15 million records which
>>>> was fully compacted.  I had to back it up by dumping all the JSON and
>>>> then restoring it by inserting it back in.  After it was done and I
>>>> compacted it the database was now only 8.8gb!  I shed 2gb because of
>>>> dropping the revision stubs still in the database.  This is likely
>>>> because each record had about 6 revisions (so around 90 million
>>>> stubs).  All of this is understandable, but 2gb isn't really
>>>> negligible when running on a virtualized instance of 35gb.  The
>>>> problem, though, is the method I used to dump to JSON and place it
>>>> back into couchdb took almost 12hrs!
>>>> Is there a way to drop all of the revision stubs and reset the
>>>> document's revision tags back to "1-" values?  I know this would
>>>> completely break any kind of replication, but in this instance I am
>>>> not doing any.
>>> It would break more than the replication. Compacting your database is
>>> the solution.
>>> - benoît
