incubator-couchdb-user mailing list archives

From: venkata subbarayudu <avsrit2...@gmail.com>
Subject: CouchDB eating up all available memory
Date: Thu, 15 Oct 2009 18:06:23 GMT
Hi All,
             I am new to CouchDB and have installed couchdb:0.10.0a781732
on a Linux CentOS machine with 16GB of RAM. I am using python-couchdb-0.6 to
interact with CouchDB. I am doing bulk inserts (batch size of 1000); each
document has roughly 50 fields, of which 10 are integers and the rest are
text fields. I am using Hadoop Map/Reduce tasks (written in Python) to write
to CouchDB, inserting about 1 million records into different databases. This
consumes all of the available memory (~16GB), and the memory is not released
even after the insertion completes. I am not sure whether this is really
caused by CouchDB: the CouchDB process itself shows only 2GB (similar to
http://issues.apache.org/jira/browse/COUCHDB-325). If the memory growth is
due to caches, are the cache files created by the CouchDB process? Is there
a way to specify the maximum memory allocated to CouchDB? Please give any
suggestions on how to restrict memory for the CouchDB process.
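
For reference, my insertion code looks roughly like this (a simplified
sketch; the actual database names and document fields differ, and each real
document has ~50 fields):

    import couchdb

    def insert_in_batches(docs, db_name, batch_size=1000):
        # Connect to the local CouchDB server (default port 5984).
        server = couchdb.Server('http://localhost:5984/')
        db = server[db_name] if db_name in server else server.create(db_name)

        batch = []
        for doc in docs:
            batch.append(doc)
            if len(batch) >= batch_size:
                # db.update() POSTs the whole batch to _bulk_docs at once.
                db.update(batch)
                batch = []
        if batch:
            db.update(batch)  # flush the final partial batch

    # Example usage with placeholder documents:
    sample_docs = ({'count': i, 'title': 'record %d' % i}
                   for i in range(1000000))
    insert_in_batches(sample_docs, 'testdb')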

Thanks in advance for your help,
Subbarayudu.
