incubator-couchdb-user mailing list archives

From: Paul Davis <paul.joseph.da...@gmail.com>
Subject: Re: CouchDB eating up all available memory
Date: Thu, 15 Oct 2009 18:15:09 GMT
Venkata,

CouchDB doesn't do any sort of caching. Are there any views building
when you're checking the memory consumption? Other than view building
and inserting, CouchDB shouldn't be retaining any allocated memory. It's
always possible that there's a bug somewhere, though. Can you reduce
the issue to a script that triggers the behavior?
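
Something like the following stripped-down loop would be a useful
starting point (an untested sketch: I'm assuming python-couchdb's
Server and Database.update bulk API, and the 'memtest' database name
and field names are made up):

import couchdb

server = couchdb.Server('http://localhost:5984/')
db = server.create('memtest')

BATCH_SIZE = 1000
TOTAL_DOCS = 1000000

def make_doc(i):
    # Roughly 50 fields per document: 10 integers plus 40 text fields,
    # matching the shape you describe.
    doc = {}
    for n in range(10):
        doc['int_%d' % n] = i + n
    for n in range(40):
        doc['text_%d' % n] = 'text field %d of doc %d' % (n, i)
    return doc

batch = []
for i in range(TOTAL_DOCS):
    batch.append(make_doc(i))
    if len(batch) == BATCH_SIZE:
        db.update(batch)  # bulk insert via _bulk_docs
        batch = []
if batch:
    db.update(batch)

If that alone drives the box toward ~16G, it points at CouchDB; if
not, it's worth looking at the Hadoop side of your pipeline.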

Paul Davis


On Thu, Oct 15, 2009 at 2:06 PM, venkata subbarayudu
<avsrit2005@gmail.com> wrote:
> Hi All,
>             I am new to CouchDB and have installed couchdb:0.10.0a781732
> (16G RAM, Linux CentOS). I am using python-couchdb-0.6 to interact with
> CouchDB. I am doing bulk inserts (batch size of 1000), and each document
> has roughly 50 fields, of which 10 are integers and the rest are text
> fields. I am using Hadoop Map/Reduce tasks (in Python) to write to
> CouchDB, inserting about 1 million records (into different databases).
> This uses up all of the available memory (~16G), and even after the
> insertion completes the memory usage doesn't come down. I don't
> understand whether this is really caused by CouchDB: the CouchDB process
> itself shows only 2GB of memory usage (similar to
> http://issues.apache.org/jira/browse/COUCHDB-325 ). If the memory growth
> is due to caches, are the cache files created by the CouchDB process? Is
> there a way to specify the maximum memory allocated to CouchDB? Please
> share any suggestions on how to restrict memory for the CouchDB process.
>
> Thanks in advance for your help,
> Subbarayudu.
>
