couchdb-user mailing list archives

From "Michael Zedeler." <mich...@zedeler.dk>
Subject Re: CouchDB handling extreme loads
Date Mon, 29 Apr 2013 21:17:15 GMT
Hi Robert.

Thanks for the suggestion to use the changes feed to do incremental 
backups. I haven't got any crash information yet, but I will try to 
reproduce the crash and post the details here.
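
Here's roughly what I understand the suggestion to mean, as a minimal 
sketch (Python and the requests library are used here just to keep the 
example short - the database name, file paths and URL are placeholders, 
not my actual setup):

    import json
    import os
    import requests

    COUCH = "http://localhost:5984"
    DB = "dbname"                   # placeholder database name
    CHECKPOINT = "backup.last_seq"  # where the last processed seq is stored
    BACKUP_FILE = "backup.jsonl"    # one document per line

    # Resume from the last checkpoint if we have one, otherwise start at 0.
    since = "0"
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            since = f.read().strip()

    # Ask the changes feed for full documents, starting after the checkpoint.
    resp = requests.get(
        f"{COUCH}/{DB}/_changes",
        params={"include_docs": "true", "since": since},
    )
    resp.raise_for_status()
    changes = resp.json()

    with open(BACKUP_FILE, "a") as out:
        for change in changes["results"]:
            doc = change.get("doc")
            if doc is not None:
                out.write(json.dumps(doc) + "\n")

    # Remember how far we got so the next run only picks up new changes.
    with open(CHECKPOINT, "w") as f:
        f.write(str(changes["last_seq"]))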

Regards,

Michael.

On 2013-04-29 13:40, Robert Newson wrote:
> You'd be much better off backing up by reading
> /dbname/_changes?include_docs=true.
>
> Reading _all_docs and then fetching each document should work fine;
> it'll just be much slower (and non-incremental, so you'll have to start
> from scratch every time you back up).
>
> Does your log include any crash information?
>
> B.
>
>
> On 29 April 2013 11:05, Michael Zedeler. <michael@zedeler.dk> wrote:
>> Hi.
>>
>> I have found a way to write a backup script using an event-driven
>> environment.
>>
>> For starters, I have just used the naïve approach of getting all document ids
>> and then fetching them one at a time.
>>
>> This works on small databases, but for obvious reasons, the load becomes too
>> big on larger databases, since my script is essentially trying to fetch too
>> many documents at the same time.
>>
>> I know that I have to throttle the requests, but it turned out that CouchDB
>> doesn't handle the load gracefully. At some point, I just get an "Apache
>> CouchDB starting" entry in the log, and at the same time I can see that at
>> least one of the running requests is closed before CouchDB has returned
>> anything.
>>
>> Is this behaviour intentional? How do I send as many requests as possible
>> without causing the server to restart?
>>
>> I'd definitely prefer it if the server could just start responding more slowly.
>>
>> I am using CouchDB 1.2 (and Perl's AnyEvent::CouchDB on the client - I gave
>> up on nano).
>>
>> Regards,
>>
>> Michael.
>>
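
P.S. On the throttling I mention in my original message above: what I 
mean is a bounded-concurrency loop along these lines. Again a rough 
Python sketch with the requests library standing in for the AnyEvent 
client, and the limit of 10 concurrent requests is an arbitrary 
assumption, not a recommended value:

    import json
    import queue
    import threading

    import requests

    COUCH = "http://localhost:5984"
    DB = "dbname"          # placeholder database name
    MAX_CONCURRENT = 10    # arbitrary limit; tune to what the server tolerates

    def fetch_worker(ids, out, lock):
        """Pull document ids off the queue and fetch them one by one."""
        session = requests.Session()
        while True:
            try:
                doc_id = ids.get_nowait()
            except queue.Empty:
                return
            resp = session.get(f"{COUCH}/{DB}/{doc_id}")
            resp.raise_for_status()
            with lock:
                out.write(json.dumps(resp.json()) + "\n")

    # Get the full list of document ids first (the non-incremental approach).
    rows = requests.get(f"{COUCH}/{DB}/_all_docs").json()["rows"]
    ids = queue.Queue()
    for row in rows:
        ids.put(row["id"])

    lock = threading.Lock()
    with open("backup.jsonl", "w") as out:
        workers = [
            threading.Thread(target=fetch_worker, args=(ids, out, lock))
            for _ in range(MAX_CONCURRENT)
        ]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

This way at most MAX_CONCURRENT requests are in flight at any time, 
instead of one request per document all at once.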

