couchdb-dev mailing list archives

From Adam Kocoloski <>
Subject Re: Attachment Replication Problem - Bug Found
Date Sat, 16 May 2009 12:45:46 GMT
Hi Antony, thank you!  I encountered this "one more byte" problem once  
before, but it wasn't 100% reproducible, so I wasn't really  
comfortable checking in a workaround.  I've basically been waiting to  
see if it would ever show up for anyone else :-/

I think we should commit this change, but I'd still like to confirm  
that the attachment on the target is not corrupted by the chunk  
processing issue (i.e. the last chunk starts with a \r or something  
like that).  Or even better, fix the chunk processing issue.
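For context on why a stray <<"\r">> implicates chunk handling: in HTTP chunked transfer coding, each chunk is framed as "<hex-size>\r\n<data>\r\n", so a decoder that hands the caller one byte past the chunk data delivers exactly the '\r' of the chunk's trailing CRLF. A minimal decoder sketch (my own illustration, not CouchDB code; assumes a single complete, well-formed chunk):

```erlang
-module(chunk_sketch).
-export([decode_chunk/1]).

%% Decode one chunk of a chunked-encoded body.
%% Frame: <hex-size> CRLF <data> CRLF <remainder>.
decode_chunk(Bin) ->
    [SizeHex, Rest] = binary:split(Bin, <<"\r\n">>),
    Size = binary_to_integer(SizeHex, 16),
    %% Taking Size + 1 bytes here instead of Size would yield
    %% <<Data/binary, "\r">> -- the exact symptom reported below.
    <<Data:Size/binary, "\r\n", Remainder/binary>> = Rest,
    {Data, Remainder}.
```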

Now, on to the checkpointing conditions.  I think there's some  
confusion about the attachment workflow.  Attachments are downloaded  
_immediately_ and in their entirety by ibrowse, which then sends the  
data as 1MB binary chunks to the attachment receiver processes.  The  
data sits in these processes' mailboxes until the next checkpoint.   
The flow control occurs entirely in Couch, not in ibrowse or the TCP  
layer.  We shouldn't end up with too many open connections this way --  
but if we do, we can tweak the max_connections and max_pipeline_size  
ibrowse parameters to throttle it.
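To make the mailbox-based flow control concrete, here's a toy sketch of the receiver pattern described above: a process that lets chunks accumulate in its mailbox and only drains them when a checkpoint is requested. All names are illustrative, not the actual couch_rep code:

```erlang
-module(att_receiver_sketch).
-export([start/0, send_chunk/2, checkpoint/1]).

%% Spawn a receiver whose mailbox buffers incoming chunks.
start() ->
    spawn(fun() -> loop([]) end).

%% Chunks pile up in the receiver's mailbox until checkpoint time;
%% no TCP- or ibrowse-level throttling is involved.
send_chunk(Pid, Bin) ->
    Pid ! {chunk, Bin},
    ok.

%% Drain everything buffered so far, in arrival order.
checkpoint(Pid) ->
    Pid ! {checkpoint, self()},
    receive {flushed, Bins} -> Bins end.

loop(Acc) ->
    receive
        {chunk, Bin} ->
            loop([Bin | Acc]);
        {checkpoint, From} ->
            From ! {flushed, lists:reverse(Acc)},
            loop([])
    end.
```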

You all appear to be right: the pending attachment data are not  
considered when deciding when to checkpoint.  That's a major bug and a  
regression in my opinion.  My bad.

In another thread Matt Goodall suggested checkpointing after a certain  
amount of time has passed.  So we'd have a checkpointing algorithm  
that considers:

* memory utilization
* number of pending writes
* time elapsed

Anything else we ought to take as an input?  I've got some time to  
hack on this today.
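The decision function for those three inputs could be as simple as the following sketch (the thresholds and names are placeholders I made up, not proposed values):

```erlang
-module(checkpoint_sketch).
-export([should_checkpoint/3]).

%% Placeholder thresholds -- illustration only.
-define(MAX_MEMORY_BYTES, 100 * 1024 * 1024).  %% 100 MB buffered
-define(MAX_PENDING_WRITES, 1000).             %% docs awaiting write
-define(MAX_ELAPSED_MS, 60000).                %% 1 min since last checkpoint

%% Checkpoint as soon as any one of the three inputs crosses
%% its threshold.
should_checkpoint(MemoryBytes, PendingWrites, ElapsedMs) ->
    MemoryBytes >= ?MAX_MEMORY_BYTES
        orelse PendingWrites >= ?MAX_PENDING_WRITES
        orelse ElapsedMs >= ?MAX_ELAPSED_MS.
```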


On May 16, 2009, at 5:08 AM, Antony Blakey wrote:

> On 16/05/2009, at 12:59 PM, Antony Blakey wrote:
>> and truncate the binary to the expected length. I'm not familiar  
>> with ibrowse in terms of debugging this problem further.
> The final mod I've ended up with is this, which deals with the  
> ibrowse problem:
> ------------------------------------------------------------------------------
> write_streamed_attachment(_Stream, _F, 0, SpAcc) ->
>    {ok, SpAcc};
> write_streamed_attachment(Stream, F, LenLeft, nil) ->
>    Bin = F(),
>    TruncatedBin = check_bin_length(LenLeft, Bin),
>    {ok, StreamPointer} = couch_stream:write(Stream, TruncatedBin),
>    write_streamed_attachment(Stream, F, LenLeft - size(TruncatedBin), StreamPointer);
> write_streamed_attachment(Stream, F, LenLeft, SpAcc) ->
>    Bin = F(),
>    TruncatedBin = check_bin_length(LenLeft, Bin),
>    {ok, _} = couch_stream:write(Stream, TruncatedBin),
>    write_streamed_attachment(Stream, F, LenLeft - size(TruncatedBin), SpAcc).
>
> check_bin_length(LenLeft, Bin) when size(Bin) > LenLeft ->
>    <<ValidData:LenLeft/binary, Crap/binary>> = Bin,
>    ?LOG_ERROR("write_streamed_attachment has written too much expected: ~p got: ~p tail: ~p",
>               [LenLeft, size(Bin), Crap]),
>    ValidData;
> check_bin_length(_, Bin) -> Bin.
> ------------------------------------------------------------------------------
> Interestingly, the problems occur at exactly the same points during  
> replication, and in each case the excess tail is <<"\r">>, which  
> suggests to me a boundary condition in processing a chunked  
> response. It's probably not a problem creating the response, because  
> direct access using wget returns the right amount of data.
> My replication still fails near the end, this time silently killing  
> couchdb, but it's getting closer.
> Antony Blakey
> --------------------------
> CTO, Linkuistics Pty Ltd
> Ph: 0438 840 787
> Always have a vision. Why spend your life making other people’s  
> dreams?
> -- Orson Welles (1915-1985)
