couchdb-dev mailing list archives

From "Adam Kocoloski (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (COUCHDB-2058) CouchDB Memory Leak - Beam.smp
Date Thu, 13 Feb 2014 15:28:21 GMT

    [ https://issues.apache.org/jira/browse/COUCHDB-2058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13900424#comment-13900424 ]

Adam Kocoloski commented on COUCHDB-2058:
-----------------------------------------

[~rohit12sh], when Paul was asking for the stats inside the VM he meant the
{{erlang:memory([atom, atom_used, processes, processes_used, binary, code,
ets]).}} invocation. So far, every time you've posted that, the output has
indicated negligible RAM usage in the pure Erlang parts of the stack. Can you
confirm that at least one of your reports was taken while CouchDB was holding
on to all that RAM? It's a little hard to tell when reading the backlog.
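
For reference, a minimal way to capture that breakdown at the running node's
Erlang shell. This is a sketch with assumptions: it presumes either that
CouchDB was started interactively with {{couchdb -i}}, or that the node was
started with a name and cookie so you can attach to it (the
{{debug@127.0.0.1}}, {{couchdb@127.0.0.1}}, and {{COOKIE}} values below are
placeholders for whatever your installation actually uses):

    %% Attach to the running VM (skip this if you already have a shell):
    %%   erl -name debug@127.0.0.1 -remsh couchdb@127.0.0.1 -setcookie COOKIE
    1> erlang:memory([atom, atom_used, processes, processes_used,
                      binary, code, ets]).
    %% Returns a proplist of {Category, Bytes}. Capture it while the OS
    %% reports beam.smp holding the RAM, and compare the totals.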

> CouchDB Memory Leak - Beam.smp
> ------------------------------
>
>                 Key: COUCHDB-2058
>                 URL: https://issues.apache.org/jira/browse/COUCHDB-2058
>             Project: CouchDB
>          Issue Type: Bug
>      Security Level: public (Regular issues)
>          Components: Database Core
>            Reporter: Rohit Sharma
>
> Hello,
> I am experiencing a performance issue with CouchDB.
> Use case: I am working on a process that retrieves data from an RDBMS,
> processes it into JSON documents, and POSTs them to CouchDB.
> I am trying to POST around half a million documents, most of them in
> batches of 10,000 via _bulk_docs (a request sketch follows this report);
> I have also tried batches of 5,000, 15,000, and 20,000.
> The whole process takes around 90-100 minutes.
> Over the life of the process, CouchDB's memory consumption keeps growing,
> and the memory is not released after CouchDB has finished working.
> So if CouchDB's memory consumption was 60% when the process finished, it
> stays at 60% and does not come back down.
> When the process starts running again, memory consumption maxes out and
> CouchDB restarts itself. This restart fails the process that I am running.
> Looking at the syslogs, I see an out-of-memory error for the CouchDB
> process and the corresponding kill statement.
> The CouchDB process with the issue is Erlang's "beam.smp".
> At this point I have tried upgrading the server's memory to see if that
> resolves the issue; unfortunately, it persists. The leak is still there,
> and usage keeps growing until CouchDB restarts/crashes.
> I also tried running garbage collection from the Erlang command line
> (erlang:garbage_collect().), but it didn't do anything (see the note after
> this report).
> At this point I am out of ideas and not sure what is going on here. Any
> input/suggestions are highly appreciated!
> Env:
> Platform: Linux (Red Hat release 6.4 (Santiago))
> CouchDB: 1.3 and have tried with 1.5 as well
> RAM: Tried with 2G, 4G, and 8G
> CPU: 2 cores
> Process:/usr/lib64/erlang/erts-5.8.5/bin/beam.smp -Bd -K true -A 4 -- -root /usr/lib64/erlang
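
For anyone reproducing the upload step described above: the batches go to the
_bulk_docs endpoint. A minimal Erlang sketch using the stock httpc client,
assuming the default 127.0.0.1:5984 listener and a placeholder database name
"mydb":

    %% inets provides the httpc HTTP client.
    inets:start(),
    Body = <<"{\"docs\":[{\"value\":1},{\"value\":2}]}">>,
    {ok, {{_, 201, _}, _Headers, _Resp}} =
        httpc:request(post,
                      {"http://127.0.0.1:5984/mydb/_bulk_docs",
                       [], "application/json", Body},
                      [], []).

A real run would send 10,000-document batches the same way; only the size of
the docs array changes.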
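
One note on the garbage-collection attempt: erlang:garbage_collect(). only
collects the process that calls it (here, the shell). A blunt diagnostic that
forces a collection of every process on the node:

    %% Walk all processes and garbage-collect each one in turn.
    [erlang:garbage_collect(Pid) || Pid <- erlang:processes()].

Even after that, the OS-visible footprint can stay high, because the
emulator's allocators may retain freed memory rather than returning it to
the OS immediately.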



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
