incubator-couchdb-user mailing list archives

From Dave Cottlehuber <d...@muse.net.nz>
Subject Re: couch attachments versus amazon S3
Date Thu, 12 Jan 2012 23:37:50 GMT
On 12 January 2012 18:12, Mark Hahn <mark@hahnca.com> wrote:
> Thanks.  I need to go back to the drawing board.
>
> On Thu, Jan 12, 2012 at 9:04 AM, Robert Newson <rnewson@apache.org> wrote:
>
>> the max_document_size default value is, I hope obviously, very silly
>> indeed. Any adventurous soul that would like to send in a 4GB json
>> blob could satisfy my curiosity by reporting how long it takes couchdb
>> to decode it here. :)
>>
>> A json document needs to be fully held in memory to be useful, which
>> is why you might want to insist on a limit. Attachments, because they
>> are uninterpreted binary, are streamed in and out. There's a structure
>> that *is* held in memory consisting of the file offsets (and lengths)
>> of each attachment chunk. At some point, that structure might be
>> prohibitively large, but there's no inherent limit to attachment size
>> beyond disk capacity.
>>
>> B.
>>
>> On 12 January 2012 17:00, Nils Breunese <N.Breunese@vpro.nl> wrote:
>> > Mark Hahn wrote:
>> >
>> >> Hmmm.  I wonder where I got that idea.  Maybe it is the max size of a
>> doc.
>> >
>> > max_document_size is configurable in local.ini. The default value is
>> 4294967296 (bytes, so 4 GB, see default.ini).
>> >
>> > Nils.
>> > ------------------------------------------------------------------------
>> >  VPRO   www.vpro.nl
>> > ------------------------------------------------------------------------
>>
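For reference, the setting Nils mentions lives in the `[couchdb]` section of local.ini (a minimal sketch; the value is in bytes, shown here at the 4 GB default from default.ini):

```ini
[couchdb]
; Maximum size of a JSON document the server will accept.
; Attachments are not counted against this limit.
max_document_size = 4294967296
```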

As Bob says, attachments are written to disk as they're received. But there is
no partial restart if a replication fails mid-attachment, so your database
file may grow large quickly in this case. Compaction would of course remove
the partials.
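Compaction is triggered per database over HTTP; a minimal sketch (assuming a local CouchDB on the default port and a hypothetical database name `mydb`):

```shell
# Trigger compaction; this rewrites the .couch file and drops
# orphaned data such as partial attachment uploads.
curl -X POST http://127.0.0.1:5984/mydb/_compact \
     -H "Content-Type: application/json"

# Compaction runs in the background; check progress via the
# compact_running field of the database info document.
curl http://127.0.0.1:5984/mydb
```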

It might not be important in your use case.

A+
Dave
