incubator-couchdb-user mailing list archives

From Glenn Rempe <>
Subject Re: Timeout Error when trying to access views + Indexing problems
Date Thu, 15 Oct 2009 16:50:49 GMT

I have not seen CouchDB consume large amounts of memory, so
unfortunately I cannot provide insight into your issue (BTW, I bulk
save docs 5000 at a time).  I would suggest starting a new discussion
thread (instead of using this one) with specific details of how you
are using CouchDB: how many and what type of docs you are trying to
bulk save, what exactly you are seeing for memory usage, and which
processes specifically are spiking in memory.  The more details you
provide, the better the CouchDB team and this group will be able to
help you.
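As a point of comparison, a batched bulk save with couchdb-python could look roughly like the sketch below. The server URL and database name are assumptions for illustration, not details from this thread:

```python
def chunked(docs, size=5000):
    """Yield successive batches of `size` docs, one per _bulk_docs request."""
    for i in range(0, len(docs), size):
        yield docs[i:i + size]

# With couchdb-python (hypothetical server URL and database name):
# import couchdb
# server = couchdb.Server("http://localhost:5984/")
# db = server["loadtest"]
# for batch in chunked(all_docs):
#     db.update(batch)  # one POST to /_bulk_docs per batch
```

Keeping each batch bounded means the client only ever holds one batch's worth of documents in memory at a time, which also makes it easier to see whether memory growth is on the client or the server side.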



On Thu, Oct 15, 2009 at 5:27 AM, venkata subbarayudu
<> wrote:
> Hi Glenn,
>           I am new to CouchDB and doing some load testing on
> CouchDB (0.10). I am using Python (2.6.2 with couchdb-python 0.6) to insert
> documents (bulk-save) into CouchDB, and CouchDB is eating up all the
> available memory, but I'm not able to figure out what exactly is causing
> CouchDB to consume so much memory (I have a 32GB machine). Please give me
> any insights into how to debug this issue and find the root cause.
> Thanks in advance for all your help,
> Subbarayudu.
> On Wed, Oct 7, 2009 at 1:09 PM, Brian Candler <> wrote:
>> On Tue, Oct 06, 2009 at 01:36:13PM -0700, Glenn Rempe wrote:
>> > No.  I have not noticed any correlation with the time.  Sometimes I
>> > have seen it run during the day and die as well.  I've seen it die
>> > lots of times...  ;-)  It seems like it is always dying though
>> > somewhere between 2 and 6 million records processed out of 28 million
>> > (which might support the theory of memory starvation of some kind if
>> > it is holding some of those records in memory unintentionally, even
>> > though top reports nothing more than 4GB out of 15GB being used).
>> You might have a per-process memory limit of some sort: either ulimit (see
>> "ulimit -a"); or a hard-coded limitation in your O/S which limits a single
>> process to 4GB, for example; or conceivably the erlang VM could have a 4GB
>> limit.
>> [I do vaguely remember people saying you should build Erlang in 32-bit
>> mode even under a 64-bit OS, but I could well have that wrong.]
>> Either way, if your process memory usage is continually growing and also
>> approaching 4GB, I would be concerned. I don't see any reason why building
>> a view should take an increasing amount of memory. It sounds like a leak.
>> Regards,
>> Brian.
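To check the limits Brian mentions, something along these lines works on most Linux systems; the Erlang VM process name may be `beam` or `beam.smp` depending on how Erlang was built:

```shell
# Show per-process resource limits for the current shell
# (look at "max memory size" and "virtual memory")
ulimit -a

# Watch the Erlang VM's resident and virtual memory while the view builds
ps -o pid,rss,vsz,comm -C beam.smp || echo "no beam.smp process found"
```

If the resident size (RSS) climbs steadily toward a round number like 4GB before the crash, that points at a per-process limit or a leak rather than general system memory pressure.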

Glenn Rempe

email                 :
voice                 : (415) 894-5366 or (415)-89G-LENN
twitter                : @grempe
contact info        :
pgp                    :
