couchdb-user mailing list archives

From Vasili Batareykin <>
Subject Re: performance?
Date Wed, 24 Mar 2010 10:01:25 GMT
about keepalive:
GET /uri HTTP/1.0
Connection: Keep-Alive
Host: somehost
User-Agent: someagent
Accept: */*

HTTP/1.1 200 OK
Date: Wed, 24 Mar 2010 09:44:36 GMT
Server: megaserver
Last-Modified: Sun, 22 Jul 2007 17:00:00 GMT
ETag: "4d6436-8cdd-435dd17316400"
Accept-Ranges: bytes
Content-Length: 36061
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Content-Type: somecontent/type

The mandatory field in the request is Connection: Keep-Alive.
Mandatory (imho) fields in the reply:
Content-Length: 36061 (tells the client how much to read)
Keep-Alive: timeout=15, max=100 (the timeouts)
Connection: Keep-Alive (or "close" if the server doesn't support the feature)
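The mechanics above can be sketched end to end. This is a minimal, hypothetical Python stub (nothing CouchDB-specific): the client sends Connection: keep-alive and uses Content-Length from each reply to know exactly where one body ends, so the same TCP socket can carry the next request. Note the stdlib server only keeps connections open for HTTP/1.1, so the sketch uses 1.1 rather than the 1.0 request in the dump.

```python
# Sketch: two GETs over ONE TCP connection (keep-alive). The stub server and
# its reply body are made up for the demo; Content-Length is what lets the
# client delimit responses on a reused socket.
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

BODY = b"hello keep-alive"

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # stdlib only keeps 1.1 connections open

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(BODY)))  # needed for reuse
        self.end_headers()
        self.wfile.write(BODY)

    def log_message(self, *args):  # silence request logging
        pass

def fetch_twice(port):
    """Issue two GETs on a single socket; return the two response bodies."""
    s = socket.create_connection(("127.0.0.1", port))
    bodies = []
    for _ in range(2):
        s.sendall(b"GET / HTTP/1.1\r\n"
                  b"Host: localhost\r\n"
                  b"Connection: keep-alive\r\n\r\n")
        # Read headers, then exactly Content-Length bytes of body.
        buf = b""
        while b"\r\n\r\n" not in buf:
            buf += s.recv(4096)
        head, _, rest = buf.partition(b"\r\n\r\n")
        length = int([ln.split(b":")[1] for ln in head.split(b"\r\n")
                      if ln.lower().startswith(b"content-length")][0])
        while len(rest) < length:
            rest += s.recv(4096)
        bodies.append(rest[:length])
    s.close()
    return bodies

if __name__ == "__main__":
    srv = HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    print(fetch_twice(srv.server_address[1]))
    srv.shutdown()
```

If the server answered Connection: close (or just closed the socket), the second sendall/recv would fail, which is the symptom ab shows when keep-alive isn't honored.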

But what we see in CouchDB's reply to the same request:

HTTP/1.0 200 OK
Server: CouchDB/0.10.0 (Erlang OTP/R13B)
Date: Wed, 24 Mar 2010 09:37:27 GMT
Content-Type: text/plain;charset=utf-8
Content-Length: 41
Cache-Control: must-revalidate

Connection timeout n/a
Connection type n/a

I think the RFC is a better source than my dump :)

In my case (replicated + versioned file storage), Erlang eats all the CPU to process requests.
Maybe there is a better way to get files from CouchDB? By version, or by ETag?
Or should I increase some kind of buffers or memory pages?
Sorry, I don't know the internals.
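On the ETag idea: a conditional GET with If-None-Match lets a server answer 304 Not Modified instead of resending the whole file when the client's copy is current. Here is a minimal sketch against a hypothetical stub server (the ETag value and path are made up; CouchDB does send an ETag on document GETs, but this stub is not its API):

```python
# Sketch: revalidate a cached file by ETag. First GET returns the full body
# plus an ETag; the second GET sends If-None-Match and gets 304 with no body.
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

ETAG = '"rev-1-abc"'      # hypothetical version tag
BODY = b"file contents"

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        if self.headers.get("If-None-Match") == ETAG:
            self.send_response(304)          # client copy is current: no body
            self.send_header("ETag", ETAG)
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("ETag", ETAG)
            self.send_header("Content-Length", str(len(BODY)))
            self.end_headers()
            self.wfile.write(BODY)

    def log_message(self, *args):
        pass

def demo(port):
    conn = HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/file")             # first fetch: full body
    r1 = conn.getresponse()
    etag, data = r1.getheader("ETag"), r1.read()
    conn.request("GET", "/file", headers={"If-None-Match": etag})
    r2 = conn.getresponse()                  # revalidation: 304, empty body
    r2.read()
    conn.close()
    return r1.status, data, r2.status

if __name__ == "__main__":
    srv = HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    print(demo(srv.server_address[1]))
    srv.shutdown()
```

This saves bandwidth on repeat fetches, though it wouldn't by itself fix a CPU-bound server, since the server still handles the request.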

2010/3/24 Randall Leeds <>

> Yes, I do mean KeepAlive, sorry for the confusion.
> CouchDB should support it. Can you show a dump of the headers received
> by Couch somehow? Maybe there is something silly like a case-sensitivity
> issue with the headers.
> CouchDB cannot use sendfile (or some Erlang way to do the same)
> because it puts bytes in the middle of the attachment on disk at
> regular intervals. As I understand it, this is so that you can store
> attachments that contain something like a database header without
> breaking your database. Otherwise, storing something that looks like
> the BTree header in an attachment could cause couch to open the
> database incorrectly after an odd crash situation.
> This behavior may sound very paranoid and strange, but one thing is
> for sure: CouchDB is designed to be *robust*. Really, really, _really_
> hard to lose your data.
> On Wed, Mar 24, 2010 at 01:01, Vasili Batareykin <>
> wrote:
> > pipelining? you mean keepalive? ab hangs the test if you supply the -k
> > option (Use HTTP KeepAlive feature); it seems couchdb's httpd doesn't
> > know about this)
> > yes, throughput (in b/s) is better, but on localhost, if i run the same
> > test with nginx i get around 1000 #/sec on a 340k file (344294.81
> > [Kbytes/sec]). yes, nginx uses sendfile for this operation. yes, the fs
> > cache is used too. but 70 #/sec with couchdb ...
> >
> > 2010/3/24 Randall Leeds <>
> >
> >> If you multiply (#/sec) by file size, you are actually getting _better_
> >> throughput with the larger files.
> >> Do you know if the ab command uses HTTP 1.1 pipelining? If not, HTTP
> >> overhead would explain the extra time.
> >>
> >
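Randall's point above about why sendfile can't apply can be illustrated with a toy model. The block size and marker byte below are invented for the demo (CouchDB's real on-disk layout differs): if the stored bytes have markers interleaved at fixed intervals, the file on disk is not the attachment, so a zero-copy sendfile() of the raw bytes would leak the markers. The server has to read, strip, and copy in userspace.

```python
# Toy model: an "attachment" stored as fixed-size blocks, each prefixed with a
# marker byte. serve() must strip the markers, so the on-disk bytes can never
# be shipped verbatim with sendfile(). BLOCK=4 and b"\x01" are made up.
BLOCK = 4
MARKER = b"\x01"

def store(data: bytes) -> bytes:
    """What ends up on disk: a marker byte before every BLOCK bytes of data."""
    out = b""
    for i in range(0, len(data), BLOCK):
        out += MARKER + data[i:i + BLOCK]
    return out

def serve(on_disk: bytes) -> bytes:
    """Recover the attachment: drop one marker byte per block."""
    out = b""
    for i in range(0, len(on_disk), BLOCK + 1):
        assert on_disk[i:i + 1] == MARKER
        out += on_disk[i + 1:i + 1 + BLOCK]
    return out
```

The payoff of the markers is the robustness Randall describes: attachment bytes on disk can never be mistaken for a database header after a crash, at the cost of an unavoidable userspace copy on every read.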
