couchdb-dev mailing list archives

From "Sean Geoghegan (JIRA)" <>
Subject [jira] Updated: (COUCHDB-567) Erlang View with Reduce Fails on Large Number of documents
Date Thu, 12 Nov 2009 00:26:39 GMT


Sean Geoghegan updated COUCHDB-567:

    Attachment: view.erl

I've attached a Ruby script that generates some test data, along with map and reduce functions
for that data which trigger this error.

> Erlang View with Reduce Fails on Large Number of documents
> ----------------------------------------------------------
>                 Key: COUCHDB-567
>                 URL:
>             Project: CouchDB
>          Issue Type: Bug
>    Affects Versions: 0.10
>            Reporter: Sean Geoghegan
>         Attachments: generate-data.rb, view.erl
> I have been having a problem with running Erlang views over a large dataset.  Whenever
the indexer goes to checkpoint its progress, the following error occurs:
> ** Last message in was {'EXIT',<0.2220.0>,
>                         {function_clause,
>                          [{couch_view_updater,view_insert_doc_query_results,
>                            [{doc,<<"73956fdca62c384849a3313e6c48b7ed">>,...
>                            [],
>                            [{{view,0,
>                                  [<<"_temp">>],
>                                  <<"...">>,
>                                  {btree,<0.2218.0>,
>                                      {1565615,{341,[0]}},
>                                      #Fun<couch_btree.3.83553141>,
>                                      #Fun<couch_btree.4.30790806>,
>                                      #Fun<couch_view.less_json_keys.2>,
>                                      #Fun<couch_view_group.11.46347864>},
>                                  [{<<"_temp">>,
>                                    <<"...">>}]},
>                              []}],
>                            [],[]]},
>                       {couch_view_updater,view_insert_query_results,4},
>                       {couch_view_updater,process_doc,4},
>                       {couch_view_updater,'-update/2-fun-0-',6},
>                       {couch_btree,stream_kv_node2,7},
>                       {couch_btree,stream_kp_node,6},
>                       {couch_btree,fold,5},
>                       {couch_view_updater,update,2}]]},
> This problem occurs regardless of the functionality of the map and reduce functions; it
seems to be based on the time the indexing takes, or whatever else causes the checkpoints
to get written out.
> I did some investigation into the problem by adding a lot of LOG_INFO statements throughout
the code.  I was able to determine the following:
>    * the Erlang view server process is held on to by the view updater for the entire duration
of the indexing;
>    * after the first checkpoint is hit and the progress is written out, a reduce call is
made to the Erlang view server; once this completes, the view server is released back to the
cache using ret_os_process;
>    * when the next reduce cycle occurs, the same Erlang view server is returned by get_os_process,
but it is first sent a reset message, which clears all the functions in the view server's state;
>    * when the next map cycle starts, the view updater uses the same handle to the Erlang
view server it had in the beginning. It assumes the server's state is unchanged, but the server
has been reset, so there are no view functions left in it.  This causes the above error when
the updater attempts to write out the result of a view function which no longer exists in
the server.
> I was able to fix this problem by modifying line 139 of couch_view_updater.erl from this:
>    {[], Group2, ViewEmptyKeyValues, []}
> to this:
>    {[], Group2#group{query_server=nil}, ViewEmptyKeyValues, []}
> Which removes the view updater's handle to the Erlang view server process, forcing it to
get or create a new one for each map cycle, which sets up the view functions within the server
again.  I don't know if this is the right way to do it, or whether it has any bad side-effects,
but it does at least prevent the crash and allow the indexing to complete correctly.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
