Subject: Re: Timeout Error when trying to access views + Indexing problems
From: Paul Davis <paul.joseph.davis@gmail.com>
To: user@couchdb.apache.org
Date: Sat, 3 Oct 2009 22:59:25 -0400

On Sat, Oct 3, 2009 at 9:46 PM, Paul Joseph Davis wrote:
> Glenn,
>
> This sounds like your map function is timing out, which causes the error. You
> could try upping the os process timeout setting in the config.
>
> To see what's going on you can raise the log level to debug or use the log
> function in your maps. There's also the status page in Futon, which I think
> you said you were looking at.
>
> If indexing crashes it should just pick up where it left off when you
> retrigger. Use the status page to verify. If it doesn't, let us know.
>
> If you can't find anything in the debug logs then ping the lust and we'll
> get into trying to duplicate.
>

Phone fail. Ping the *list* rather.

Paul
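For reference, the two knobs mentioned above live in CouchDB's ini config (local.ini). A minimal sketch, assuming a 0.10/0.11-era setup; the timeout value is only an example, not a recommendation:

    [couchdb]
    ; milliseconds the view server may spend on a single call before
    ; CouchDB kills it; the default is 5000
    os_process_timeout = 60000

    [log]
    ; raise from info to debug to see view server traffic in couch.log
    level = debug

And log() inside a map function writes straight to couch.log, which can help narrow down which documents are slow. The doc field used here is made up purely for illustration:

    function(doc) {
      log("mapping " + doc._id);   // shows up in couch.log
      if (doc.title) {             // hypothetical field, adjust to your schema
        emit(doc.title, null);
      }
    }

Settings can also be changed at runtime through the _config API, e.g. curl -X PUT http://127.0.0.1:5984/_config/couchdb/os_process_timeout -d '"60000"', though editing local.ini and restarting works just as well.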
> Paul Davis
>
> On Oct 3, 2009, at 9:24 PM, Glenn Rempe wrote:
>
>> Slightly more info on this. I see the following stack trace when this
>> happens:
>>
>> [Sun, 04 Oct 2009 01:18:41 GMT] [info] [<0.3343.0>] Stacktrace: [{gen_server,call,2},
>>             {couch_view,get_group_server,2},
>>             {couch_view,get_group,3},
>>             {couch_view,get_map_view,4},
>>             {couch_httpd_view,design_doc_view,5},
>>             {couch_httpd_db,do_db_req,2},
>>             {couch_httpd,handle_request,5},
>>             {mochiweb_http,headers,5}]
>>
>> I suspected that it might be related to low RAM or CPU on the EC2
>> instance I am running on (with the CouchDB data on an EBS volume), so I
>> upgraded to an extra large instance with 15 GB RAM and four cores.
>>
>> No difference at all. I now get this error almost instantly whenever I
>> select any of the views you see in the pastie below in the single design
>> doc.
>>
>> Help!? :-)
>>
>> Thanks.
>>
>> Glenn
>>
>> On Sat, Oct 3, 2009 at 9:10 AM, Glenn Rempe wrote:
>>
>>> Hello all,
>>> I am looking for some guidance on how to eliminate an error I am seeing
>>> when trying to access views, and help with getting a large design
>>> document fully indexed.
>>>
>>> Yesterday I upgraded to a trunk install of CouchDB (0.11.0b) in an attempt
>>> to resolve my second problem (see below). I have a DB that currently has
>>> about 16 million records in it and I am in the midst of importing more, up
>>> to a total of about 26 million. Yesterday, when I would try to access one of
>>> my map/reduce views, I would see the indexing process kick off in the Futon
>>> status page and I would see the couchjs process in 'top'. But today, if I
>>> try to access any view, I see the following error from CouchDB within about
>>> 3 seconds of requesting it:
>>>
>>> http://pastie.org/640511
>>>
>>> The first few lines of it are:
>>>
>>> Error: timeout{gen_server,call,
>>>     [couch_view,
>>>      {get_group_server,<<"searchlight_production">>,
>>>       {group,
>>>           <<95,25,15,251,46,213,137,116,110,135,150,210,66,56,105,172>>,
>>>           nil,nil,<<"_design/SearchDocument">>,<<"javascript">>,[],
>>>           [{view,0,
>>>
>>> I have tried restarting CouchDB several times, without success.
>>>
>>> Any thoughts as to what might be happening here and how I might prevent
>>> it?
>>>
>>> Related to this is my second problem. Whenever I have tried to index a
>>> view of this large DB, the indexing process seems to silently die out after
>>> a while, and it never gets through indexing the whole DB. I have seen it get
>>> through tens of thousands up to a few million docs before dying (out of
>>> millions). Questions:
>>>
>>> - Is there a recommended method to figure out what is happening in the
>>> internals of the indexing that may be causing it to fail?
>>> - If indexing fails before having gone through the entire result set at
>>> least once, does it continue where it left off at the last crash? Or does it
>>> need to start the whole indexing process over from scratch?
>>> - How can I best ensure that my large DB gets fully indexed?
>>>
>>> Thank you for the help.
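On the monitoring and retriggering questions above, a rough sketch of what to poke at over HTTP. The database and design doc names come from the error output quoted earlier; the view name is made up, so substitute one actually defined in _design/SearchDocument:

    # Futon's status page is fed by _active_tasks; you can poll it directly
    # to watch indexing progress
    curl http://127.0.0.1:5984/_active_tasks

    # any plain request against a view retriggers indexing from the last
    # checkpoint ("by_title" is a hypothetical view name)
    curl 'http://127.0.0.1:5984/searchlight_production/_design/SearchDocument/_view/by_title?limit=0'

    # stale=ok should answer from whatever is already indexed instead of
    # blocking on the build
    curl 'http://127.0.0.1:5984/searchlight_production/_design/SearchDocument/_view/by_title?stale=ok'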
>>>
>>> Glenn
>>>
>>> --
>>> Glenn Rempe
>>>
>>> email        : glenn@rempe.us
>>> voice        : (415) 894-5366 or (415)-89G-LENN
>>> twitter      : @grempe
>>> contact info : http://www.rempe.us/contact.html
>>> pgp          : http://www.rempe.us/gnupg.txt
>>>
>>
>>
>> --
>> Glenn Rempe
>>
>> email        : glenn@rempe.us
>> voice        : (415) 894-5366 or (415)-89G-LENN
>> twitter      : @grempe
>> contact info : http://www.rempe.us/contact.html
>> pgp          : http://www.rempe.us/gnupg.txt
>