incubator-couchdb-user mailing list archives

From Tim Tisdall <>
Subject dropping revision records
Date Thu, 06 Sep 2012 14:18:00 GMT
I had a database of about 10.8 GB with almost 15 million records, fully
compacted.  I backed it up by dumping all the documents to JSON and
then restored it by inserting them back in.  After that was done and I
compacted it, the database was only 8.8 GB!  I shed 2 GB by dropping
the revision stubs that were still in the database.  This is likely
because each record had about 6 revisions (so around 90 million
stubs).  All of this is understandable, but 2 GB isn't really
negligible when running on a virtualized instance with 35 GB of disk.
The problem, though, is that the method I used to dump to JSON and
load it back into CouchDB took almost 12 hours!
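For reference, the dump side of this can be sketched in Python. This assumes the documents come from a standard `_all_docs?include_docs=true` response body; the helper name is mine, not anything from CouchDB itself:

```python
def docs_from_all_docs(response):
    """Extract the bare documents from a parsed CouchDB
    _all_docs?include_docs=true response, skipping design documents
    so only regular records get dumped."""
    return [row["doc"] for row in response["rows"]
            if not row["id"].startswith("_design/")]
```

The resulting list can then be written out as JSON and fed back in batches to `_bulk_docs` on restore.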

Is there a way to drop all of the revision stubs and reset each
document's revision back to a "1-" value?  I know this would
completely break any kind of replication, but in this instance I'm
not doing any.
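One way to get documents back to "1-" revisions is to strip the revision metadata from each dumped document before re-inserting it into a fresh database, so CouchDB assigns a brand-new first revision. A minimal sketch of that transformation (the function name is my own):

```python
def strip_rev(doc):
    """Return a copy of a dumped document with revision metadata
    removed, so re-inserting it into a fresh database starts it
    at a "1-" revision with no old stubs attached."""
    return {k: v for k, v in doc.items()
            if k not in ("_rev", "_revisions", "_revs_info")}
```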

The best method I can think of is to insert each record into a new DB
(not through replication, though, because that carries the stubs over
with it).  Then go through the _changes feed from the point where I
started and re-copy those documents to make sure everything is
up-to-date.  This would save me from having things down for 12 hours,
but I have no idea how long the process would take.
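The catch-up step above could be sketched like this, assuming the `_changes` entries have been fetched and parsed already (each entry has an "id" and, for deletions, a "deleted" flag; the helper name is hypothetical):

```python
def docs_to_recopy(changes):
    """Given parsed entries from CouchDB's _changes feed, return the
    set of document ids that must be re-copied to the new database.
    Later entries for the same id win, and documents whose final
    change is a deletion are skipped."""
    latest = {}
    for change in changes:
        latest[change["id"]] = change.get("deleted", False)
    return {doc_id for doc_id, deleted in latest.items() if not deleted}
```

Each id in the returned set would then be fetched fresh from the old database and written into the new one.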


