couchdb-user mailing list archives

From Mike Kimber <>
Subject RE: CouchDB slow response times
Date Fri, 20 Apr 2012 08:27:37 GMT
Performance is relative, and effective performance is very much determined by the use case. For example,
we do analytics with CouchDB: it's faster than a traditional RDBMS in many cases (especially
if your views are queried regularly) on less hardware (disk space not included, but that's
a trade-off, and compression in 1.2 helps greatly here), and it's easier to use for document analysis.
However, it may not be a great fit for very high-read use cases currently. If that's your use
case then there are other options, e.g. Redis (possibly as a front end to Couch) or, dare I
say it here, MongoDB and Couchbase, or numerous other commercial options, from in-memory databases
to column-oriented databases. But again, it depends on the use case.

You may want to describe your use case, i.e. what you are trying to accomplish, to allow the
community to provide informed comment on your observations.



-----Original Message-----
From: Attila Nagy [] 
Sent: 20 April 2012 08:35
Subject: Re: CouchDB slow response times

On 04/19/12 08:28, Attila Nagy wrote:
> So getting an exact document took 0.098921 seconds (about 99 
> milliseconds) on a completely idle machine.
> Any subsequent queries are in the order of the above response time, 
> which is just slow.
> Is this what CouchDB and Erlang capable of, or something is wrong in 
> my setup? I haven't turned compression off, BTW, but will measure its 
> effect.
Without compression:
07:43:03.822390 HTTP GET /test/1
07:43:03.823475 HTTP/1.1 200 OK
07:43:03.919761 the JSON data
so the response time is .097371 seconds (97.37 ms)

In the meantime, I've found that at some point CouchDB/HTTPd stopped 
enabling TCP_NODELAY by default, so
socket_options = [{nodelay, true}]
gives a 2.47 ms response time, which is a major improvement.
I could lower that further to 2.1 ms by switching to 
null_authentication_handler, which is not a good option in general, but faster.
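For reference, a minimal sketch of where that option lives in CouchDB's local.ini (section name as in the stock config of that era; adjust if your version differs):

```ini
[httpd]
; Disable Nagle's algorithm on response sockets, so small HTTP responses
; are sent immediately instead of being batched by the kernel.
socket_options = [{nodelay, true}]
```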

On query performance: when I fetch the same documents (one by one, from ID 
number one to the last) from three different machines with four threads on 
each of them (so 12 concurrent HTTP GETs can be on the wire), CouchDB 
saturates one CPU core (Xeon X5670 @ 2.93GHz; I've limited it to one 
core) at 100%, and I get about 1700 queries/sec.
These are just plain HTTP GETs, so no JSON parsing is involved.
Switching to persistent connections gives 2200 queries/sec (again, CouchDB 
maxes out the CPU).
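The persistent-connection pattern being measured looks roughly like this in a self-contained Python sketch (the tiny stand-in server and the `/test/1` document body are made up for illustration; CouchDB itself isn't involved):

```python
import http.client
import http.server
import threading
import time

# Hypothetical stand-in for CouchDB: a local HTTP/1.1 server returning a JSON doc.
class DocHandler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # needed for keep-alive (persistent) connections

    def do_GET(self):
        body = b'{"_id": "1", "ok": true}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), DocHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# One persistent connection: a single TCP handshake amortized over all GETs,
# which is where the 1700 -> 2200 queries/sec jump comes from.
conn = http.client.HTTPConnection("127.0.0.1", port)
n = 100
start = time.perf_counter()
for _ in range(n):
    conn.request("GET", "/test/1")
    resp = conn.getresponse()
    data = resp.read()  # must drain the body before reusing the connection
elapsed = time.perf_counter() - start
conn.close()
server.shutdown()

print(f"{n} GETs over one persistent connection: {elapsed * 1000:.1f} ms total")
```

Opening a fresh `HTTPConnection` per request instead would add a TCP handshake (and, without nodelay, a Nagle delay) to every single GET.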

I hope some day CouchDB will be able to deliver performance too.
