couchdb-user mailing list archives

From Paul Davis <paul.joseph.da...@gmail.com>
Subject Re: Timeout Error when trying to access views + Indexing problems
Date Sun, 04 Oct 2009 02:59:25 GMT
On Sat, Oct 3, 2009 at 9:46 PM, Paul Joseph Davis
<paul.joseph.davis@gmail.com> wrote:
> Glenn,
>
> This sounds like your map function is timing out, which causes the error. You
> could try upping the os_process_timeout setting in the config.
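For reference, a sketch of the setting Paul mentions (assuming the trunk/0.10-style ini layout; the 60000 value here is just an example, not a recommendation):

```ini
[couchdb]
; Milliseconds CouchDB waits for the couchjs view server to respond
; before killing it. The default is 5000; raise it if individual map
; calls legitimately take longer on large documents.
os_process_timeout = 60000
```

This goes in local.ini so it survives upgrades; CouchDB needs a restart (or a config reload) to pick it up.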
>
> To see what's going on you can raise the log level to debug or use the log()
> function in your maps. There's also the status page in Futon, which I think
> you said you were looking at.
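As a sketch of the log()-in-a-map idea (the "type" field and its value are invented for illustration; log() and emit() are globals provided by CouchDB's JavaScript view server, not defined here), something like this writes a line to couch.log for every document processed, so the last line before a hang points at the offending doc:

```javascript
// Hypothetical map function: log() traces progress per document,
// emit() produces the view rows. Both are supplied by couchjs at
// index time; the doc shape below is made up for this example.
var map = function (doc) {
  log("mapping doc " + doc._id);          // appears in couch.log
  if (doc.type === "search_document") {   // hypothetical field
    emit(doc._id, null);
  }
};
```

With the log level at info or debug, grepping couch.log for "mapping doc" shows exactly how far indexing got before couchjs stalled.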
>
> If indexing crashes it should just pick up where it left off when you
> retrigger it. Use the status page to verify. If it's not resuming, let us know.
>
> If you can't find anything in the debug logs then ping the lust and we'll
> get into trying to duplicate.
>

Phone fail. Ping the *list* rather.

Paul

> Paul Davis
>
> On Oct 3, 2009, at 9:24 PM, Glenn Rempes <glenn@rempe.us> wrote:
>
>> Slightly more info on this.  I see the following stack trace when this
>> happens:
>> [Sun, 04 Oct 2009 01:18:41 GMT] [info] [<0.3343.0>] Stacktrace:
>> [{gen_server,call,2},
>>            {couch_view,get_group_server,2},
>>            {couch_view,get_group,3},
>>            {couch_view,get_map_view,4},
>>            {couch_httpd_view,design_doc_view,5},
>>            {couch_httpd_db,do_db_req,2},
>>            {couch_httpd,handle_request,5},
>>            {mochiweb_http,headers,5}]
>>
>>
>> I suspected it might be related to low RAM or CPU on the EC2 instance I am
>> running on (with CouchDB on an EBS volume), so I upgraded to an extra-large
>> instance with 15GB RAM and four cores.
>>
>> No difference at all.  I get this error now almost instantly whenever I
>> select any of the views you see in the pastie below in the single design
>> doc.
>>
>> Help!?  :-)
>>
>> Thanks.
>>
>> Glenn
>>
>> On Sat, Oct 3, 2009 at 9:10 AM, Glenn Rempe <glenn@rempe.us> wrote:
>>
>>> Hello all,
>>> I am looking for some guidance on how I can eliminate an error I am
>>> seeing
>>> when trying to access views, and help with getting through indexing a
>>> large
>>> design document.
>>>
>>> Yesterday I upgraded to a trunk install of CouchDB (0.11.0b) in an
>>> attempt
>>> to resolve my second problem (see below). I have a DB that currently has
>>> about 16 million records in it and I am in the midst of importing more up
>>> to
>>> a total of about 26 million.  Yesterday when I would try to access one of
>>> my
>>> map/reduce views I would see the indexing process kick off in the Futon
>>> status page and I would see the couchjs process in 'top'.  But today, if
>>> I
>>> try to access any view I see the following error from CouchDB within about
>>> 3 seconds of requesting any view:
>>>
>>> http://pastie.org/640511
>>>
>>> The first few lines of it are:
>>>
>>> Error: timeout{gen_server,call,
>>>   [couch_view,
>>>    {get_group_server,<<"searchlight_production">>,
>>>        {group,
>>>            <<95,25,15,251,46,213,137,116,110,135,150,210,66,56,105,172>>,
>>>            nil,nil,<<"_design/SearchDocument">>,<<"javascript">>,[],
>>>            [{view,0,
>>>
>>>
>>> I have tried without success restarting the CouchDB several times.
>>>
>>> Any thoughts as to what might be happening here and how I might prevent
>>> it?
>>>
>>> Related to this is my second problem.  Whenever I have tried to index a
>>> view of this large DB, the indexing process seems to silently die out
>>> after a while and never gets through indexing the whole DB.  I have seen
>>> it get through tens of thousands up to a few million docs before dying
>>> (out of millions).  Questions:
>>>
>>> - Is there a recommended method to figure out what is happening in the
>>> internals of the indexing that may be causing it to fail?
>>> - If indexing fails before having gone through the entire result set at
>>> least once, does it continue where it left off at the last crash?  Or does
>>> it need to start the whole indexing process over from scratch?
>>> - How can I best ensure that my large DB gets fully indexed?
>>>
>>> Thank you for the help.
>>>
>>> Glenn
>>>
>>> --
>>> Glenn Rempe
>>>
>>> email                 : glenn@rempe.us
>>> voice                 : (415) 894-5366 or (415)-89G-LENN
>>> twitter                : @grempe
>>> contact info        : http://www.rempe.us/contact.html
>>> pgp                    : http://www.rempe.us/gnupg.txt
>>>
>>>
>>
>>
>
