couchdb-user mailing list archives

From Sho Fukamachi <>
Subject Re: replication error
Date Sun, 01 Feb 2009 16:52:48 GMT

On 02/02/2009, at 2:23 AM, Adam Kocoloski wrote:
> Hi Sho, are you getting req_timedout errors in the logs?  It seems a  
> little weird to me that ibrowse starts the timer while it's still  
> sending data to the server; perhaps there's an alternative we  
> haven't noticed.

Yeah. Like this:

[<0.166.0>] retrying couch_rep HTTP post request due to {error,  
req_timedout}: "http://localhost:2808/media/_bulk_docs"

then bombs out:

[error] [emulator] Error in process <0.166.0> with exit value:  

After the 10 retries it gives an error report, but I assume you know  
what it says .. if not, I can post it. Anyway, it finishes eventually;  
it just needs a lot of babysitting.

> There's no way to change the request timeout or bulk docs size at  
> runtime right now, but if you don't mind digging into the source  
> yourself you can change these as follows:
> 1) request timeout -- line 187 of couch_rep.erl looks like
> case ibrowse:send_req(Url, Headers, Action, Body, Options) of
> You can add a timeout in milliseconds as a sixth parameter to  
> ibrowse:send_req.  The default is 30000.  I think the atom  
> 'infinity' also works.

OK, I tried this. Unfortunately I have no idea what I am doing in  
Erlang, so I completely screwed it up. I made it this:

case ibrowse:send_req(Url, Headers, Action, Body, Options, 120000)

Compiles fine but now throws this error if I try to replicate:

[error] [<0.50.0>] Uncaught error in HTTP request: {error, 

No doubt every Erlang programmer here wants to punch me for doing  
something that dumb, but putting that aside for the moment .. any  
hints? : )
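[For reference, here is a guess at what the patched call ought to look like. This is only a sketch: the clause patterns shown are placeholders, not the real bodies from couch_rep.erl, and the key point is that the trailing `of` has to stay so the `case` expression still parses after the timeout is added:]

```erlang
%% couch_rep.erl, around line 187 -- sketch of the patched request call.
%% 120000 is the new request timeout in milliseconds, passed as the
%% sixth argument to ibrowse:send_req. The trailing 'of' must remain,
%% and the existing case clauses below it must be left untouched.
case ibrowse:send_req(Url, Headers, Action, Body, Options, 120000) of
    {ok, Status, ResponseHeaders, ResponseBody} ->
        %% ... keep the original success clause here ...
        handle_response(Status, ResponseHeaders, ResponseBody);
    {error, Reason} ->
        %% ... keep the original retry/error clause here ...
        retry(Reason)
end
```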

> 2) bulk_docs size -- The number "100" is mentioned three times in  
> couch_rep:get_doc_info_list/2.  You can lower that to something that  
> works better for you.

Well, a change in one place seems better than changes in three .. I'll  
stick to the timeout for now.

My feeling is that CouchDB should probably reduce the bulk docs size,  
or increase the timeout, or both, automatically when it hits a timeout  
error - or at least make them configurable in local.ini. As discussed  
here before, people are using Couch to store largish attachments, and  
that is an intended use, so this kind of thing will definitely come up  
again. Or, of course, if the upcoming multipart feature will solve all  
of this, then never mind, heh.
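[Something along these lines in local.ini is the idea - to be clear, these section and option names are invented for illustration; nothing like them exists in CouchDB today:]

```ini
; hypothetical local.ini settings -- these keys do not exist yet,
; they only sketch what a configurable replicator might look like
[replicator]
request_timeout = 120000   ; per-request HTTP timeout, in milliseconds
bulk_docs_size = 100       ; number of docs per _bulk_docs POST
```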

Thanks a lot for the help ..

