couchdb-user mailing list archives

From Randall Leeds <>
Subject Re: operational file size
Date Sat, 08 Jan 2011 20:31:49 GMT
It's hard to estimate how big the compacted database will be given the
size of the original. In the worst case (when your database is already
compacted), compacting it again will double your usage, since it
creates a whole new, optimized copy of the database file.

More likely, the original is not compact, so the new file will be much
smaller.

Clearly, then, if you want to be ultra-safe, no single database should
exceed 50% of your disk capacity. It is still safe, however, to have
many small databases whose total disk consumption is much higher, since
each database is compacted individually.
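To make the 50% rule concrete, here is a minimal sketch (the helper name is mine, not from this thread): because compaction writes a fresh copy of the file, the old and new copies coexist until the swap completes, so in the worst case you need free space at least equal to the current file size.

```python
def compaction_is_safe(db_file_bytes, free_bytes):
    """Worst case: the compacted copy is as large as the original,
    so both files coexist on disk until the swap completes."""
    return free_bytes >= db_file_bytes

GB = 1024 ** 3
# 100 GB disk, 50 GB database, 50 GB free: just barely safe.
print(compaction_is_safe(50 * GB, 50 * GB))  # True
# 60 GB database with only 40 GB free: not enough headroom.
print(compaction_is_safe(60 * GB, 40 * GB))  # False
```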

The best solution is to compact your databases regularly and track the
usage and size differences so you get a good sense of how fast you're
growing. And remember: if you find yourself in a sticky situation where
you can't compact, you probably still have plenty of time to replicate
to a bigger machine or to a hosted cluster such as the one offered by
Cloudant. Good monitoring is the best way to avoid disaster.
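A minimal monitoring sketch along these lines: CouchDB reports the file size as `disk_size` in the response to `GET /{db}`, so you can poll it and compute how fast you're growing. The server URL and the `growth_rate` helper are assumptions for illustration.

```python
import json
from urllib.request import urlopen

COUCH = "http://localhost:5984"  # assumption: a local CouchDB instance

def disk_size(db):
    """Read the disk_size field from GET /{db}."""
    with urlopen(f"{COUCH}/{db}") as resp:
        return json.load(resp)["disk_size"]

def growth_rate(samples):
    """Average bytes grown per interval, from successive
    disk_size readings taken at a fixed polling interval."""
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    return sum(deltas) / len(deltas)

# Poll disk_size("mydb") on a schedule and feed the readings in:
print(growth_rate([10_000, 12_000, 15_000]))  # 2500.0
```

Dividing the remaining free space by this rate gives a rough estimate of how long you have before compaction (or migration) becomes urgent.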

On Sat, Jan 8, 2011 at 10:39, Jeffrey M. Barber <> wrote:
> If I'm running CouchDB with 100GB of disk space, what is the maximum CouchDB
> database size such that I'm still able to optimize?
> I remember running out of room on a rackspace machine, and I got the
> strangest of error codes when trying to run CouchDB.
> -J
