incubator-couchdb-user mailing list archives

From Randall Leeds <>
Subject Re: beam CPU hog
Date Wed, 28 Jul 2010 09:02:12 GMT
On Wed, Jul 28, 2010 at 01:41, Sivan Greenberg <> wrote:
> Just another note, the problem seems to grow larger as the database
> size expands. I am going to time each operation now to see if I can
> find a specific culprit.

I looked at your code and I've got a couple of things for you to try.

1) Spinning up a replication means a bunch of HTTP requests. The fact
that the requests are local only means you're not seeing network
latency, so your CPU is more pinned than it would be in the real
world. You could try creating 100 documents first and causing
conflicts on all of them in a single replication, or replace your URLs
with bare db names (i.e., just 'session_store_rep' and
'session_store') to get local replication, which bypasses the HTTP
layer. The latter option will also reduce the amount of JSON
encoding/decoding you're doing.
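As a sketch of the bare-db-names suggestion (my own example, not from the thread; the server URL, database names, and use of the stdlib urllib are assumptions about your setup), a POST to the _replicate endpoint with plain database names instead of full URLs asks CouchDB to replicate locally:

```python
import json
import urllib.request


def local_replication_body(source, target):
    """Build the JSON body for CouchDB's _replicate endpoint.

    Passing bare database names (not http:// URLs) requests a local
    replication, which bypasses the HTTP layer between the databases.
    """
    return {"source": source, "target": target}


def trigger_replication(server="http://127.0.0.1:5984"):
    # Hypothetical server address; adjust for your installation.
    body = json.dumps(
        local_replication_body("session_store", "session_store_rep")
    ).encode("utf-8")
    req = urllib.request.Request(
        server + "/_replicate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(trigger_replication())
```

Compare this against your current URL-based replication: same endpoint, same request shape, just bare names for source and target.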

2) Give us more info about your problem with 'load'. You really
shouldn't care about the CPU load; how long your test takes is much
more important. If you're getting a decent number of operations/second
and your CPU is pinned, you should be thrilled. Imagine if you were
encoding an audio file and your CPU wasn't at 100%: you'd be annoyed
because the encode would take longer than it has to. In general, if
you can be CPU bound you're doing things right, as long as things are
humming along quickly. It means CouchDB is fulfilling your requests as
quickly as possible and neither the network nor the disk is a
bottleneck.
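To make "how long your test takes" concrete, here's a minimal timing sketch (my own, not from the thread) for measuring operations/second around whatever operation you're profiling; `op` stands in for any callable you pass it:

```python
import time


def ops_per_second(op, n=100):
    """Invoke op() n times and return the measured operations/second.

    This is the number to watch: if it's high while the CPU is pinned,
    the database is keeping up and nothing else is the bottleneck.
    """
    start = time.perf_counter()
    for _ in range(n):
        op()
    elapsed = time.perf_counter() - start
    return n / elapsed if elapsed > 0 else float("inf")
```

Wrap each of your operations (document create, conflict resolution, replication trigger) in a call like `ops_per_second(lambda: do_one_update(), n=100)` to find the specific culprit you mentioned.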

Hoping this helps you out :)


View raw message