Mailing-List: contact user-help@couchdb.apache.org; run by ezmlm
Reply-To: user@couchdb.apache.org
In-Reply-To: <517EEA6F.3030707@zedeler.dk>
References: <517E45ED.90308@zedeler.dk> <517EE35B.2070809@zedeler.dk> <517EEA6F.3030707@zedeler.dk>
Date: Mon, 29 Apr 2013 22:52:07 +0100
Subject: Re: CouchDB handling extreme loads
From: Robert Newson
To: "user@couchdb.apache.org"

Aha! Thanks for the update.

On 29 April 2013 22:47, Michael Zedeler. wrote:
> Hi Robert.
>
> (Again.)
>
> The cause has been found: the server ran out of memory due to a memory
> leak in my script.
>
> Regards,
>
> Michael.
>
>
> On 2013-04-29 23:17, Michael Zedeler. wrote:
>>
>> Hi Robert.
>>
>> Thanks for the suggestion to use the changes feed in order to do
>> incremental backups. I haven't got any crash information, but will try
>> to generate one and post it here.
>>
>> Regards,
>>
>> Michael.
>>
>> On 2013-04-29 13:40, Robert Newson wrote:
>>>
>>> You'd be much better off backing up by reading
>>> /dbname/_changes?include_docs=true.
>>>
>>> Reading _all_docs and then fetching each document should work fine,
>>> it'll just be much slower (and non-incremental: you'll have to start
>>> from scratch every time you back up).
>>>
>>> Does your log include any crash information?
>>>
>>> B.
>>>
>>>
>>> On 29 April 2013 11:05, Michael Zedeler. wrote:
>>>>
>>>> Hi.
>>>>
>>>> I have found a way to write a backup script using an event-driven
>>>> environment.
>>>>
>>>> For starters, I have just used the naïve approach of getting all
>>>> document ids and then fetching one document at a time.
>>>>
>>>> This works on small databases, but for obvious reasons the load
>>>> becomes too big on larger databases, since my script is essentially
>>>> trying to fetch too many documents at the same time.
>>>>
>>>> I know that I have to throttle the requests, but it turned out that
>>>> CouchDB doesn't handle the load gracefully. At some point, I just get
>>>> an "Apache CouchDB starting" entry in the log, and at the same time I
>>>> can see that at least one of the running requests is closed before
>>>> CouchDB has returned anything.
>>>>
>>>> Is this behaviour intentional? How do I send as many requests as
>>>> possible without causing the server to restart?
>>>>
>>>> I'd definitely prefer it if the server could just start responding
>>>> more slowly.
>>>>
>>>> I am using CouchDB 1.2 (and Perl's AnyEvent::CouchDB on the client; I
>>>> gave up on nano).
>>>>
>>>> Regards,
>>>>
>>>> Michael.
>>>>
>>
>
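[A rough sketch of Robert's suggestion: backing up via /dbname/_changes?include_docs=true in batches instead of _all_docs plus one GET per document. Python rather than the Perl AnyEvent::CouchDB client used in the thread; the server URL, database name, and batch size are assumptions, not anything specified on the list.]

```python
import json
import urllib.request

COUCH = "http://localhost:5984"  # assumed server address
DB = "dbname"                    # placeholder database name from the thread
BATCH = 100                      # throttle: changes fetched per request

def extract_docs(changes):
    """Pull full documents out of a parsed _changes?include_docs=true payload.
    Rows without a "doc" key (e.g. deletions) are skipped."""
    return [row["doc"] for row in changes.get("results", []) if "doc" in row]

def backup(since=0):
    """Walk the _changes feed in bounded batches, collecting docs plus the
    last_seq checkpoint to persist for the next incremental run."""
    docs = []
    while True:
        url = (f"{COUCH}/{DB}/_changes"
               f"?include_docs=true&limit={BATCH}&since={since}")
        with urllib.request.urlopen(url) as resp:
            changes = json.load(resp)
        if not changes.get("results"):
            return docs, since
        docs.extend(extract_docs(changes))
        since = changes["last_seq"]
```

Persisting the returned last_seq between runs is what makes the backup incremental, and the limit parameter caps how much work each request asks of the server, which is the throttling Michael was after.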