couchdb-user mailing list archives

From Robert Newson <>
Subject Re: CouchDB handling extreme loads
Date Mon, 29 Apr 2013 11:40:12 GMT
You'd be much better off backing up by reading the _changes feed (with
include_docs=true), which lets you make each backup incremental.

Reading _all_docs and then fetching each document should work fine;
it'll just be much slower (and non-incremental: you'll have to start
from scratch every time you back up).
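For reference, a minimal sketch of the incremental route, hitting the
_changes endpoint of the HTTP API directly with AnyEvent::HTTP (the
database URL is a placeholder; persisting last_seq between runs is what
makes the backup incremental):

#!/usr/bin/env perl
use strict;
use warnings;
use AnyEvent;
use AnyEvent::HTTP;
use JSON qw(decode_json);

my $db    = 'http://localhost:5984/mydb';  # placeholder URL
my $since = 0;  # load the last_seq saved by the previous run

my $cv = AE::cv;
http_get "$db/_changes?include_docs=true&since=$since", sub {
    my ($body, $headers) = @_;
    my $changes = decode_json($body);
    for my $row (@{ $changes->{results} }) {
        # $row->{doc} is the full document body; write it to the backup store.
        print "backed up $row->{id}\n";
    }
    # Persist this so the next run only sees documents changed since now.
    my $last_seq = $changes->{last_seq};
    $cv->send;
};
$cv->recv;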

Does your log include any crash information?


On 29 April 2013 11:05, Michael Zedeler. <> wrote:
> Hi.
> I have found a way to write a backup script in an event-driven
> environment.
> For starters, I have just used the naïve approach: get all document ids,
> then fetch each document individually.
> This works on small databases, but for obvious reasons the load becomes too
> big on larger ones, since my script essentially tries to fetch too many
> documents at the same time.
> I know that I have to throttle the requests, but it turned out that CouchDB
> doesn't handle the load gracefully. At some point, I just get an "Apache
> CouchDB starting" entry in the log, and at the same time I can see that at
> least one of the running requests is closed before CouchDB has returned
> anything.
> Is this behaviour intentional? How do I send as many requests as possible
> without causing the server to restart?
> I'd definitely prefer it if the server could just start responding more slowly.
> I am using CouchDB 1.2 (and Perl's AnyEvent::CouchDB on the client - I gave
> up on nano).
> Regards,
> Michael.
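A minimal sketch of the throttling Michael asks about, again with
AnyEvent::HTTP and a placeholder database URL: cap the number of
in-flight requests and refill the window as each one completes (the
backup-write step is left as a stub):

#!/usr/bin/env perl
use strict;
use warnings;
use AnyEvent;
use AnyEvent::HTTP;
use JSON qw(decode_json);
use URI::Escape qw(uri_escape);

my $db           = 'http://localhost:5984/mydb';  # placeholder URL
my $max_inflight = 10;   # tune to what the server tolerates
my $inflight     = 0;

my $done = AE::cv;
$done->begin;   # hold the condvar open until the id list is queued

http_get "$db/_all_docs", sub {
    my ($body, $headers) = @_;
    my @ids = map { $_->{id} } @{ decode_json($body)->{rows} };

    my $pump; $pump = sub {
        # Keep at most $max_inflight requests running at once.
        while ($inflight < $max_inflight && @ids) {
            my $id = shift @ids;
            $inflight++;
            $done->begin;
            http_get "$db/" . uri_escape($id), sub {
                my ($doc_body, $doc_headers) = @_;
                # ... write $doc_body to the backup here ...
                $inflight--;
                $done->end;
                $pump->();   # refill the window as each request finishes
            };
        }
    };
    $pump->();
    $done->end;   # release the guard taken above
};

$done->recv;     # block until every queued fetch has completed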
