incubator-couchdb-user mailing list archives

From: Adam Kocoloski <kocol...@apache.org>
Subject: Re: Read request throughput
Date: Thu, 02 Dec 2010 14:41:55 GMT
On Dec 2, 2010, at 6:29 AM, Huw Selley wrote:

>> include_docs=true is definitely more work at read time than embedding the docs in
>> the view index.  I'm not sure about your application design constraints, but given
>> that your database and index seem to fit entirely in RAM at the moment you could
>> experiment with emitting the doc in your map function instead ...
>> 
>>> The total amount of data returned from the request is 1467 bytes.
>> 
>> ... especially when the documents are this small.
> 
> Sure, but I would have expected that to only really help if the system was contending
> for resources? I am using linked docs so not sure about emitting the entire doc in the view.

Didn't realize you were using linked docs.  You're certainly right, there's no way to emit
those directly.
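
For anyone following along, the difference looks roughly like this (the field names here are made up purely for illustration).  With linked docs you emit a value containing an _id, and ?include_docs=true resolves that other document at read time, so it can never be baked into the index; embedding only works when you emit the doc you're currently mapping:

    // linked documents: the referenced doc is fetched per request
    function(doc) {
      emit(doc.name, {_id: doc.partner_id});
    }

    // embedding: the view row already carries the doc, no read-time lookup
    function(doc) {
      emit(doc.name, doc);
    }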

>> Hmm, I've heard that we did something to break compatibility with 12B-5 recently.
>> We should either fix it or bump the required version.  Thanks for the note.
> 
> COUCHDB-856?

Ah, right. That one was my fault.  But Filipe fixed it in r1034380, so it shouldn't have caused
you any trouble here.

>> Do you know if the CPU load was spread across cores or concentrated on a single one?
>> One thing Kenneth did not mention in that thread is that you can now bind Erlang
>> schedulers to specific cores.  By default the schedulers are unbound; maybe RHEL is
>> doing a poor job of distributing them.  You can bind them using the default strategy
>> for your CPUs by starting the VM with the "+sbt db" option.
> 
> It was using most of 2 cores. I had a go with "+sbt db" and it didn't perform as well
> as "-S 16:2".
> 
> WRT disabling HT - I need to take a trip to the datacentre to disable HT in the bios
> but I tried disabling some cores with:
> 
> echo 0 > /sys/devices/system/node/nodeX/cpuX/online
> 
> Which should stop the kernel seeing the core - not as clean as disabling it in the bios
> but should suffice. /proc/cpuinfo stopped showing the cores I removed so it looks like
> it worked.
> Again I didn't see any improvement.

Ok, interesting.  When you request an up-to-date view there are basically 7 Erlang processes
involved: one HTTP connection handler, two couch_file servers (one for .couch and one for
.view), a couch_db server, a couch_view_group server, and then two registered processes (couch_server
and couch_view).  When you send additional concurrent requests for the same view CouchDB spawns
off additional HTTP handlers to do things like JSON encoding and header processing, but these
other six processes just need to handle the additional load themselves.

The fact that you only saw two cores regularly used suggests that one of these processes turned
into a bottleneck (and when they weren't blocked, the other processes ran on the second core).
 My guess would be the DB couch_file, since every view request was hitting it multiple times:
once to open the ddoc and N times to load the linked documents.  But that's just a guess.
 I'm mildly surprised that you see a significant gain from dropping down to 2 active schedulers,
and it's not a mode of operation I would recommend if you plan to have multiple active databases.
 But I can see where it might help this particular benchmark a bit.
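
If you want to sanity-check that guess, one rough approach (assuming you can attach a remote shell to the CouchDB node; the exact node name and cookie depend on how you start it) is to look for processes whose mailboxes back up while the benchmark is running, something like:

    %% erl -name check@127.0.0.1 -remsh <couchdb node> -setcookie <cookie>
    Queues = [{Pid, Len, erlang:process_info(Pid, registered_name)}
              || Pid <- erlang:processes(),
                 {message_queue_len, Len} <- [erlang:process_info(Pid, message_queue_len)],
                 Len > 0],
    lists:reverse(lists:keysort(2, Queues)).

A couch_file pid sitting at the top of that list under load would point at the file server; if all the queues stay near zero, the time is more likely going to the HTTP handlers themselves.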

This is the first time I've seen someone try to maximize the throughput for this particular
type of request, so I don't have any more bright suggestions.  If I'm right about the cause
of the bottleneck I can think of new optimizations we might add to reduce it in the future,
but nothing in terms of tweaks to the server config.

Regards,

Adam

> 
> Cheers
> Huw

