couchdb-user mailing list archives

From Joran Greef <>
Subject Re: Concurrent Requests From Multiple Clients For The Same Resource
Date Mon, 18 May 2009 13:22:58 GMT
Hi Adam,

Great, thanks for your quick reply.

I'm running 0.9.0, so that would explain the response times. And good  
to hear about the upcoming JSON encoding improvements. Thanks for  
the link to Paul's message; I would help out, but I need to get more  
familiar with SpiderMonkey first, since I've been using Rhino and only  
just recently started with that.

Looking forward to 0.9.1.

Thanks, Joran

On 18 May 2009, at 3:09 PM, Adam Kocoloski wrote:

Hi Joran, can I ask what version of CouchDB you are running?  There's  
a bug in 0.9.0 that causes it to report incorrect (too low) response  
times with concurrent requests.  The bug is fixed in trunk and will  
also be fixed in the 0.9.1 release.

When I do this test on trunk I get CouchDB reporting mean response  
times in the 1500ms range, in agreement with what you see in Rhino.

Now, as for why CouchDB slows down so much: the request you're  
making in this test requires a good deal of JSON marshaling.  The BEAM  
process on my laptop was using a steady ~140% of the CPU while  
handling those 4 simultaneous connections, and it would've taken more  
if the clients weren't each grabbing ~10%.
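A CPU-bound encoding step like this is easy to demonstrate. Below is a stand-in illustration using Python's json module (not CouchDB's actual Erlang encoder, and the document shape is invented for the example): serializing a roughly 500 KB response body is pure CPU work, so concurrent clients fetching the same large view contend for cores rather than overlapping I/O.

```python
# Stand-in illustration (Python's json module, not CouchDB's Erlang
# encoder): serializing a ~500 KB response body is pure CPU work.
import json
import time

# Hypothetical view rows, sized to produce roughly half a megabyte of JSON.
docs = [{"_id": str(i), "value": "x" * 100} for i in range(4000)]

start = time.perf_counter()
body = json.dumps({"rows": docs})
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"encoded {len(body)} bytes in {elapsed_ms:.1f} ms")
```

Run four of these encodes at once on a dual-core machine and the wall-clock time per encode roughly doubles, which is the same contention pattern the BEAM process shows here.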

There's definitely some work coming down the pipe to improve JSON  
encoding efficiency.  In fact, if you feel like getting involved you  
could test out Paul Davis' new experimental work on this front:

Cheers, Adam

On May 18, 2009, at 8:22 AM, Joran Greef wrote:

> Hi everyone,
> I opened up several Rhino shells this morning and ran the following  
> code from each of them at the same time:
> var test = function () {
>   for (var index = 0; index < 40; index++) {
>       var start = new Date().getTime(), options = {output:"", err:""};
>       runCommand("curl", " 
> ", options);
>       print((new Date().getTime() - start) + "ms");
>   }
> };
> test();
> It causes each Rhino shell to make 40 requests to Couch and, for  
> each request, print the time taken to complete it. I gave the first  
> shell a head start, then started test() in another and so on, until  
> 4 shells were making requests concurrently.
> The first couple of requests in the first shell took 500ms on  
> average to retrieve 527 KB. But I was surprised to see that as each  
> of the other shells kicked in and started making requests, the  
> average response time grew accordingly, from +/-500ms to +/-1000ms  
> to +/-1500ms to +/-2000ms across all 4 shells, as if Couch were  
> queueing the concurrent requests and handling them serially.
> Couch stats reported an average of 400ms response time (excluding  
> Mochiweb) for the duration of the test. Could it be that while Couch  
> can handle concurrent requests in parallel, Mochiweb cannot and  
> blocks?
> Thanks,
> Joran Greef
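The latency progression Joran reports is exactly what a serialized bottleneck predicts. As a back-of-the-envelope model (an assumption for illustration, not a measurement of CouchDB): if the server effectively processes requests one at a time with a ~500 ms service time, each of N clients keeping one request in flight waits for the other N-1 requests plus its own, so mean latency grows to roughly N x 500 ms.

```python
# Queueing back-of-the-envelope: with strictly serial handling at a
# fixed ~500 ms service time, mean per-request latency scales linearly
# with the number of concurrent clients.
service_ms = 500
for clients in (1, 2, 3, 4):
    print(f"{clients} client(s): ~{clients * service_ms} ms per request")
```

This reproduces the observed ~500/1000/1500/2000 ms steps, which is consistent with the requests being serialized on a CPU-bound stage (the JSON encoding Adam describes) rather than with Mochiweb blocking on I/O.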
