couchdb-user mailing list archives

From Sivan Greenberg <si...@omniqueue.com>
Subject Re: beam CPU hog
Date Wed, 28 Jul 2010 09:55:18 GMT
Another odd thing is that I can't figure out why, after 100-300 consecutive
runs of the script, from some point onward it fails in the setUp method when
trying to save the doc - without any apparent failure in runTest that could
cause tearDown not to run and leave a residual doc in the db, causing the
version conflict....
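For reference, a minimal sketch of the setUp/tearDown pattern described
above, assuming couchdb-python and hypothetical db/doc names (the actual
script isn't shown in this thread). If tearDown is ever skipped, the
leftover doc makes the next setUp's save fail with a conflict:

    import couchdb
    import unittest

    class SessionConflictTest(unittest.TestCase):
        def setUp(self):
            self.db = couchdb.Server('http://localhost:5984/')['session_store']
            # If a previous run left this doc behind, saving it again without
            # the current _rev raises couchdb.http.ResourceConflict (HTTP 409).
            self.doc = {'_id': 'test_session', 'cart': []}
            self.db.save(self.doc)   # fills in _id/_rev on success

        def runTest(self):
            pass  # replication / conflict-resolution steps go here

        def tearDown(self):
            # Only reached when setUp succeeded; if the process dies mid-run,
            # the doc stays in the db and the next setUp hits the conflict.
            self.db.delete(self.doc)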

Sivan

On Wed, Jul 28, 2010 at 12:26 PM, Sivan Greenberg <sivan@omniqueue.com> wrote:
> On Wed, Jul 28, 2010 at 12:02 PM, Randall Leeds <randall.leeds@gmail.com> wrote:
>> 1) Spinning up a replication means a bunch of HTTP requests. The fact
>> that the requests are local only means you're not seeing network
>> latency and your cpu is more pinned than in the real world. You could
>> try creating 100 documents first and causing conflicts on all of them
>> in a single replication or replace your urls with bare db names (i.e.,
>> just 'session_store_rep' and 'session_store') to get local replication
>> which bypasses the HTTP layer. The latter option will also reduce the
>> amount of json encoding/decoding you're doing.
>
> I actually want the test to be as close to real-world conditions as
> possible, so bypassing the HTTP layer feels like running the test less
> rigorously. Please correct me if I'm wrong, as there's a chance I still
> haven't gotten to the bottom of this. I did change the target to not use
> HTTP, since that is how it will be in the real deployment - thanks for
> noticing that :)
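As a rough sketch of the two replication styles being discussed - bare db
names (local replication, bypassing the HTTP layer) versus full URLs -
assuming couchdb-python and a default CouchDB on localhost:5984:

    import couchdb

    server = couchdb.Server('http://localhost:5984/')

    # Local replication: bare db names, no HTTP between the two databases.
    server.replicate('session_store', 'session_store_rep')

    # HTTP-based replication of the same databases, closer to a remote setup.
    server.replicate('http://localhost:5984/session_store',
                     'http://localhost:5984/session_store_rep')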
>
>>
>> 2) Give us more info about your problem with 'load'. You really
>> shouldn't care about the cpu load. How long your test takes is much
>> more important. If you're getting a decent number of operations/second
>> and your cpu is pinned you should be thrilled.
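As a rough illustration of measuring throughput rather than CPU load,
something like the following could wrap the test body (run_one_cycle is a
hypothetical stand-in for one replicate-and-resolve pass):

    import time

    def measure(run_one_cycle, iterations=100):
        start = time.time()
        for _ in range(iterations):
            run_one_cycle()
        elapsed = time.time() - start
        print('%d cycles in %.1fs -> %.1f ops/sec'
              % (iterations, elapsed, iterations / elapsed))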
>
> The problem is that the servers designated to host CouchDB are very
> strong servers (8 GB of RAM, lots of CPU cores), but they also do a
> couple of other things, like running an HTTP server and possibly a few
> more services. So when the CPU is hogged, the performance of the web
> apps is affected.
>
> However - I would not care too much about this to start with if
> CouchDB's performance actually produced the desired result. The idea is
> to have conflicts resolved in real time, or near real time, so that we
> stay as coherent as possible with the winning doc or the latest version
> of a shopping cart. Right now, when the db gets a bit big (still less
> than 1 GB), operations take so long that the expected outcome - the
> conflicts being cleared (e.g. _conflicts disappearing from the doc
> object when fetched with conflicts=true) - does not happen, and when
> simulating a user action that triggers fetching his session details,
> he gets the wrong version...
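A minimal sketch of that check - fetching the session doc with
conflicts=true and looking for _conflicts - assuming couchdb-python and a
hypothetical doc id:

    import couchdb

    db = couchdb.Server('http://localhost:5984/')['session_store']
    doc = db.get('user-123-session', conflicts='true')

    if doc is None:
        print('doc not found')
    elif '_conflicts' in doc:
        # Resolution hasn't caught up; a user fetching the session now may
        # still see a losing revision.
        print('unresolved conflicts: %s' % doc['_conflicts'])
    else:
        print('conflict-free; winning revision is %s' % doc['_rev'])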
>
> For the record, no, I can't use sticky sessions :) (this has come up
> once or twice already)
>
>> Hoping this helps you out :)
>
> This is a start :-)
>
> Many thanks so far!
>
> Sivan
>
