couchdb-user mailing list archives

From Tim Tisdall <>
Subject Re: reducing db size
Date Mon, 14 May 2012 20:01:25 GMT
Okay, I see that you can tell that it's running by doing a GET on the
database in question and looking for "compact_running": true.  However,
I don't see any change in the db's file size.
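As a side note for anyone following the thread: compaction is kicked off with a `POST /{db}/_compact` and its progress is visible in the `GET /{db}` response. Below is a minimal sketch of how to read that response; the sample payload, db name, and sizes are made up for illustration, and the `data_size` field only exists in newer CouchDB releases.

```python
import json

# Hypothetical payload shaped like a GET /{db} response.
# Compaction itself would be started with: POST /{db}/_compact
SAMPLE = ('{"db_name": "mydb", "compact_running": true, '
          '"disk_size": 10737418240, "data_size": 6100000000}')

def compaction_status(info):
    """Return (running, overhead_ratio) from a parsed GET /{db} response.

    overhead_ratio is disk_size / data_size; values well above 1.0
    suggest compaction has room to reclaim space.
    """
    running = info.get("compact_running", False)
    disk = info.get("disk_size", 0)
    data = info.get("data_size") or disk  # data_size absent in older releases
    ratio = disk / data if data else 1.0
    return running, ratio

running, ratio = compaction_status(json.loads(SAMPLE))
```

Note that the file size only drops once compaction finishes: CouchDB writes a new `.compact` file alongside the old one and swaps them at the end, so watching the original file mid-compaction shows no change.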

On Mon, May 14, 2012 at 3:42 PM, Tim Tisdall <> wrote:

> Yes, I did it with a PUT for each id.  When you call for compaction, is
> there a way to see the progress or a way to know if it's done?
> On Mon, May 14, 2012 at 3:20 PM, Paul Davis <> wrote:
>> How did you insert them? If you did a PUT per docid you'll still want
>> to compact afterwards.
>> On Mon, May 14, 2012 at 2:13 PM, Tim Tisdall <> wrote:
>> > I've got several gigabytes of data that I'm trying to store in a
>> > couchdb on a single machine.  I've placed a section of the data in
>> > an sqlite db and the file is about 5.9gb.  I'm currently placing the
>> > same data into couchdb and while it hasn't finished yet, the file
>> > size is already 10gb and continuing to grow.  The sqlite database is
>> > essentially a table of ids with a json block of text for each, so I
>> > figured the couchdb wouldn't be too much different in size.
>> >
>> > Does anyone have some recommendations on how to reduce the size of
>> > the db?  Right now I've only inserted data and have not made any
>> > "updates" to documents, so there should be no revision copies to be
>> > cleared away.
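For the per-document PUTs discussed above, one way to cut the write overhead (and the amount of garbage that compaction later has to reclaim) is to batch inserts through `POST /{db}/_bulk_docs`. A minimal sketch of building that request body follows; the db name and document contents are placeholders, not anything from this thread.

```python
import json

def bulk_docs_payload(docs):
    """Build the JSON body for POST /{db}/_bulk_docs.

    _bulk_docs inserts or updates many documents in a single HTTP
    request, instead of one PUT per docid.
    """
    return json.dumps({"docs": docs})

# Hypothetical documents: an _id plus the json block stored per id.
payload = bulk_docs_payload([
    {"_id": "id-0001", "data": {"k": 1}},
    {"_id": "id-0002", "data": {"k": 2}},
])
# The payload would then be POSTed with Content-Type: application/json.
```

Batching a few hundred to a few thousand docs per request is a common pattern; a compaction pass afterwards is still worthwhile, as Paul notes.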
