incubator-couchdb-user mailing list archives

From Adam Kocoloski <>
Subject Re: replication error
Date Sun, 01 Feb 2009 15:23:57 GMT
On Feb 1, 2009, at 3:48 AM, Sho Fukamachi wrote:

> On 30/01/2009, at 6:03 PM, Adam Kocoloski wrote:
>> Hi Jeff, it's starting to make some more sense now.  How big are  
>> the normal attachments?  At present, Couch encodes all attachments  
>> using Base64 and inlines them in the JSON representation of the  
>> document during replication.  We'll fix this in the 0.9 release by  
>> taking advantage of new support for multipart requests[1], but  
>> until then replicating big attachments is iffy at best.
> In the meantime, is there any way to increase the timeout, or limit  
> the number of docs couch tries to send in one bulk_docs transaction?  
> Replication is failing for me even with attachments under 500K,  
> since my upload speed from home isn't that good.
> Sho

Hi Sho, are you getting req_timedout errors in the logs?  It seems a  
little weird to me that ibrowse starts the timer while it's still  
sending data to the server; perhaps there's an alternative we haven't  
noticed.  There's no way to change the request timeout or bulk docs  
size at runtime right now, but if you don't mind digging into the  
source yourself, you can change them as follows:

1) request timeout -- line 187 of couch_rep.erl looks like

case ibrowse:send_req(Url, Headers, Action, Body, Options) of

You can add a timeout in milliseconds as a sixth parameter to  
ibrowse:send_req.  The default is 30000.  I think the atom 'infinity'  
also works.
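For example, the modified call might look like this (the 60000 below is just an illustrative value, not a recommendation -- pick whatever suits your connection):

```erlang
%% couch_rep.erl -- pass an explicit request timeout (in milliseconds)
%% as the sixth argument to ibrowse:send_req/6.  60000 (one minute) is
%% only an example; the shipped default is 30000.
case ibrowse:send_req(Url, Headers, Action, Body, Options, 60000) of
```

or, to disable the timeout entirely, pass the atom 'infinity' in place of the number.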

2) bulk_docs size -- The number "100" is mentioned three times in  
couch_rep:get_doc_info_list/2.  You can lower that to something that  
works better for you.
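To illustrate the kind of edit involved (this is a hypothetical sketch, not the actual get_doc_info_list/2 source -- the real code uses the literal in three places):

```erlang
%% Hypothetical excerpt of couch_rep:get_doc_info_list/2.  Wherever the
%% batch-size literal 100 appears, e.g.
%%   lists:sublist(DocInfoList, 100)
%% lower it to a smaller batch, here an example value of 10:
lists:sublist(DocInfoList, 10)
```

Smaller batches mean each _bulk_docs request carries less data, so a slow upstream link is less likely to hit the request timeout.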

Best, Adam
