incubator-couchdb-user mailing list archives

From Jan Lehnardt <...@apache.org>
Subject Re: Large attachments
Date Mon, 22 Nov 2010 16:23:53 GMT

On 22 Nov 2010, at 15:51, Bram Neijt wrote:

> Bit of a misunderstanding here: it is about downloads, not uploads.
> 
> For example:
> dd if=/dev/urandom of=/tmp/test.bin count=50000 bs=10240
> Put test.bin as an attachment in a CouchDB database
> Run
> for i in {0..50}; do curl http://localhost:5984/[test database]/[doc_id]/test.bin > /dev/null 2>&1 & done
> 
> This will create 50 curl processes which download from your couchdb.
> Looking at the memory consumption of couchdb, it seems like it is
> loading large parts of the file into memory.
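The repro steps above can be consolidated into one script. This is a sketch, not from the thread itself: it assumes a local CouchDB 1.0.x on port 5984 with a database "test" containing a doc "doc1", and the `sed` extraction of `_rev` is a quick placeholder hack, not a robust JSON parser.

```shell
#!/bin/sh
# Repro sketch for the reported memory blow-up. Database name "test"
# and doc id "doc1" are placeholder assumptions, not from the thread.

# 1. Create a ~512 MB file of random bytes (50000 blocks x 10240 bytes).
dd if=/dev/urandom of=/tmp/test.bin count=50000 bs=10240 2>/dev/null

# 2. Attach it via the standalone attachment API (the PUT needs the
#    doc's current _rev; the sed here is a crude stand-in for a JSON parser).
REV=$(curl -s http://localhost:5984/test/doc1 | sed 's/.*"_rev":"\([^"]*\)".*/\1/')
curl -s -X PUT -H 'Content-Type: application/octet-stream' \
     --data-binary @/tmp/test.bin \
     "http://localhost:5984/test/doc1/test.bin?rev=$REV" > /dev/null

# 3. Fire 50-odd concurrent downloads and watch the CouchDB (beam.smp)
#    process's resident memory while they run.
for i in $(seq 0 50); do
  curl -s http://localhost:5984/test/doc1/test.bin > /dev/null &
done
wait
```

While the loop runs, `top` or `ps -o rss -p <pid>` on the Erlang process shows whether memory grows with the number of concurrent readers.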

Curious. Can you open a JIRA ticket for this?

  https://issues.apache.org/jira/browse/COUCHDB

Cheers
Jan
-- 

> 
> Bram
> 
> 
> On Mon, Nov 22, 2010 at 3:11 PM, Robert Newson <robert.newson@gmail.com> wrote:
>> Curl buffers binary uploads, depending on the manner you perform the operation.
>> 
>> B.
>> 
>> On Mon, Nov 22, 2010 at 2:03 PM, Bram Neijt <bneijt@gmail.com> wrote:
>>> I can reproduce this problem: if I upload a 500 MB attachment and start 10
>>> concurrent curl commands, memory usage increases dramatically with the
>>> following environment:
>>> Description:    Ubuntu 10.10
>>> Release:        10.10
>>> Codename:       maverick
>>> {"couchdb":"Welcome","version":"1.0.1"}
>>> 
>>> Bram
>>> 
>>> On Tue, Nov 16, 2010 at 5:56 PM,  <evxdo@bath.ac.uk> wrote:
>>>> Well, I'm just doing a GET directly to the document_id + attachment:
>>>> http://localhost:5984/database/doc_id/attachment
>>>> 
>>>> Clicking on the attachment in Futon would have the same effect.
>>>> 
>>>> David
>>>> 
>>>> Quoting Jan Lehnardt <jan@apache.org>:
>>>> 
>>>>> Hi David,
>>>>> 
>>>>> On 16 Nov 2010, at 14:00, evxdo@bath.ac.uk wrote:
>>>>> 
>>>>>> Hi everyone,
>>>>>> 
>>>>>> I'm trying to work with some large attachments (around 1.5 GB). When I
>>>>>> go to download these (as a standalone attachment) the CouchDB process grows
>>>>>> in size by at least the size of the attachment before the download starts.
>>>>>> This implies that the attachment is being loaded into memory entirely
>>>>>> before being sent to the client. Has anyone else seen this behaviour? Is
>>>>>> this a bug, or is there a configuration change I can make to resolve this?
>>>>>> 
>>>>>> I've tried disabling compression on attachments in case it's the
>>>>>> compression that's causing the problem.
>>>>>> 
>>>>>> I'm using 1.0.1.
>>>>> 
>>>>> What does your request look like?
>>>>> 
>>>>> The standalone attachment API does not buffer.
>>>>> 
>>>>> Cheers
>>>>> Jan
>>>>> --
>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>> 
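For reference, the distinction Jan draws between the standalone attachment API and fetching attachments inline can be sketched with two requests (database and document names are placeholders; whether the inline form buffers server-side is the open question of this thread, not an established fact):

```shell
# Standalone attachment API: the response body is the raw bytes,
# which the server can in principle stream chunk by chunk.
curl -s http://localhost:5984/database/doc_id/attachment > attachment.bin

# Inline alternative: the document JSON with every attachment
# base64-encoded in place, which requires the server to encode the
# full attachment into the JSON response.
curl -s 'http://localhost:5984/database/doc_id?attachments=true' > doc.json

# base64 also inflates the payload by roughly a third:
printf 'abc' | base64    # 3 input bytes become 4 output characters
```

So even aside from any bug, the inline form is the wrong tool for a 1.5 GB attachment; the standalone GET is the one that should stay flat in memory.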

