incubator-couchdb-user mailing list archives
From: Glenn Rempe <gl...@rempe.us>
Subject: Timeout Error when trying to access views + Indexing problems
Date: Sat, 03 Oct 2009 16:10:41 GMT
Hello all,
I am looking for some guidance on eliminating an error I see when
trying to access views, and on getting a design document indexed
against a large database.

Yesterday I upgraded to a trunk install of CouchDB (0.11.0b) in an attempt
to resolve my second problem (see below). I have a DB that currently holds
about 16 million records, and I am in the midst of importing more, up to a
total of about 26 million. Yesterday, when I tried to access one of my
map/reduce views, I would see the indexing process kick off on the Futon
status page and the couchjs process in 'top'. But today, if I try to access
any view, I get the following error from CouchDB within about 3 seconds of
the request:

http://pastie.org/640511

The first few lines of it are:

Error: timeout
{gen_server,call,
    [couch_view,
     {get_group_server,<<"searchlight_production">>,
         {group,
             <<95,25,15,251,46,213,137,116,110,135,150,210,66,56,105,172>>,
             nil,nil,<<"_design/SearchDocument">>,<<"javascript">>,[],
             [{view,0,
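
For completeness, nothing unusual is involved in triggering this; a plain
GET against any view in that design document times out the same way.
Roughly what I am doing (Python sketch; the view name "by_date" is made up
for illustration):

    import json
    import urllib.error
    import urllib.request

    # Any view in _design/SearchDocument fails identically within ~3s;
    # "by_date" here is just a stand-in for one of my real view names.
    url = ("http://127.0.0.1:5984/searchlight_production"
           "/_design/SearchDocument/_view/by_date?limit=10")
    try:
        with urllib.request.urlopen(url) as resp:
            print(json.load(resp))
    except urllib.error.HTTPError as e:
        # CouchDB hands back the gen_server timeout above in the body
        print(e.code, e.read())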


I have tried restarting CouchDB several times, without success.

Any thoughts as to what might be happening here and how I might prevent it?

Related to this is my second problem. Whenever I have tried to index a view
of this large DB, the indexing process seems to die silently after a while
and never gets through the whole DB. I have seen it get through anywhere
from tens of thousands to a few million docs (out of millions) before
dying. Questions:

- Is there a recommended way to find out what is happening inside the
indexer that may be causing it to fail? (The polling sketch after these
questions is the best I have come up with so far.)
- If indexing fails before it has gone through the entire result set at
least once, does it resume where it left off at the last crash, or does it
need to start the whole indexing process over from scratch?
- How can I best ensure that my large DB gets fully indexed?
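
For what it's worth, the only visibility I have right now is the Futon
status page, which as I understand it just polls /_active_tasks. This is
roughly how I have been watching the indexer (Python sketch; the "type"
and "status" fields are what I believe _active_tasks returns on my 0.11
trunk build):

    import json
    import time
    import urllib.request

    # Poll /_active_tasks (the same data the Futon status page shows)
    # and log indexer progress, so I can see when and where it dies.
    while True:
        with urllib.request.urlopen("http://127.0.0.1:5984/_active_tasks") as resp:
            tasks = json.load(resp)
        if not tasks:
            print(time.ctime(), "no active tasks -- has the indexer died?")
        for task in tasks:
            print(time.ctime(), task.get("type"), task.get("status"))
        time.sleep(30)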

Thank you for the help.

Glenn

-- 
Glenn Rempe

email        : glenn@rempe.us
voice        : (415) 894-5366 or (415)-89G-LENN
twitter      : @grempe
contact info : http://www.rempe.us/contact.html
pgp          : http://www.rempe.us/gnupg.txt
