incubator-couchdb-user mailing list archives

From Robert Newson <robert.new...@gmail.com>
Subject Re: performance?
Date Wed, 24 Mar 2010 10:12:15 GMT
IIRC apachebench only speaks HTTP/1.0, but uses a common spec
violation to support keep-alive. This likely confuses CouchDB, which
speaks HTTP/1.1.

Keep-alive is also not the same as pipelining. Keep-alive just reuses
connections, whereas HTTP pipelining sends multiple requests without
waiting for the responses. The responses are read as a batch later;
this lets you largely circumvent the latency of a network roundtrip.
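The distinction can be demonstrated with Python's standard library (a sketch, not CouchDB's actual HTTP stack): a tiny HTTP/1.1 server is started locally, then two GET requests are sent back-to-back on one socket before reading either response, so both replies come back over a single kept-alive connection in one round trip.

```python
import socket
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 keeps connections alive by default

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Pipelining: two requests sent without waiting for the first response;
# both responses are then read as a batch from the same connection.
req = (f"GET / HTTP/1.1\r\nHost: 127.0.0.1:{port}\r\n\r\n").encode()
with socket.create_connection(("127.0.0.1", port)) as s:
    s.sendall(req + req)          # second request sent before first reply arrives
    data = b""
    while data.count(b"hello") < 2:
        data += s.recv(4096)

print(data.count(b"HTTP/1.1 200"))  # 2 -- both replies on one socket
server.shutdown()
```

With plain keep-alive you would instead read the first response fully before sending the second request; the connection is still reused, but you pay one round trip per request.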

B.

P.S. My local apachebench certainly uses HTTP/1.0:

Benchmarking 127.0.0.1 (be patient)...INFO: POST header ==
---
GET / HTTP/1.0
Host: 127.0.0.1:5984
User-Agent: ApacheBench/2.3
Accept: */*


---
LOG: header received:
HTTP/1.0 200 OK
Server: CouchDB/0.11.0 (Erlang OTP/R13B)
Date: Wed, 24 Mar 2010 10:11:32 GMT
Content-Type: text/plain;charset=utf-8
Content-Length: 41
Cache-Control: must-revalidate


On Wed, Mar 24, 2010 at 10:01 AM, Vasili Batareykin <john2do@gmail.com> wrote:
> about keepalive:
> working:
> req:
> GET /uri HTTP/1.0
> Connection: Keep-Alive
> Host: somehost
> User-Agent: someagent
> Accept: */*
>
> reply:
> HTTP/1.1 200 OK
> Date: Wed, 24 Mar 2010 09:44:36 GMT
> Server: megaserver
> Last-Modified: Sun, 22 Jul 2007 17:00:00 GMT
> ETag: "4d6436-8cdd-435dd17316400"
> Accept-Ranges: bytes
> Content-Length: 36061
> Keep-Alive: timeout=15, max=100
> Connection: Keep-Alive
> Content-Type: somecontent/type
>
> The mandatory field in the request is Connection: Keep-Alive.
> Mandatory (IMHO) fields in the reply:
> Content-Length: 36061 (how much to read ...)
> Keep-Alive: timeout=15, max=100 (timeouts)
> Connection: Keep-Alive (or "close" if the feature is not supported)
>
> but what we see in couchdb's reply to same req:
>
> HTTP/1.0 200 OK
> Server: CouchDB/0.10.0 (Erlang OTP/R13B)
> Date: Wed, 24 Mar 2010 09:37:27 GMT
> Content-Type: text/plain;charset=utf-8
> Content-Length: 41
> Cache-Control: must-revalidate
>
> Connection timeout n/a
> Connection type n/a
>
> I think the RFC is a better source than my dump :)
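The reuse decision Vasili describes can be sketched as a small check over the reply headers (a hypothetical client-side helper, not code from CouchDB or ab; it models the HTTP/1.0 opt-in semantics discussed in the thread, where the server must say Keep-Alive explicitly and the body length must be known):

```python
def can_reuse(headers: dict) -> bool:
    """Return True if the reply permits reusing the connection.

    Assumes header names are already lower-cased. For HTTP/1.0, the
    connection stays open only when the server opts in explicitly and
    the body length is delimited (so the client knows where it ends).
    """
    conn = headers.get("connection", "").lower()
    if conn == "close":
        return False
    has_length = ("content-length" in headers
                  or headers.get("transfer-encoding", "").lower() == "chunked")
    return conn == "keep-alive" and has_length

# The working reply from the thread: explicit opt-in plus a length.
good = {"connection": "keep-alive", "content-length": "36061",
        "keep-alive": "timeout=15, max=100"}
# CouchDB's reply: Content-Length present, but no Connection header at all.
couch = {"content-length": "41"}

print(can_reuse(good), can_reuse(couch))  # True False
```

Under this reading, ab closes the connection after CouchDB's reply because the opt-in header never arrives, which matches the "Connection type n/a" in the dump below.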
>
> In my case (replicated + versioned file storage) Erlang eats all the
> CPU processing requests.
> Maybe there is a better way to get files from CouchDB? By version or by ETag?
> Or by increasing some kind of buffers or memory pages?
> Sorry, I don't know the internals.
>
> 2010/3/24 Randall Leeds <randall.leeds@gmail.com>
>
>> Yes, I do mean KeepAlive, sorry for the confusion.
>> CouchDB should support it. Can you show a dump of the headers received
>> by Couch somehow? Maybe there is something silly like an issue of case
>> with the headers.
>>
>> CouchDB cannot use sendfile (or some Erlang way to do the same)
>> because it puts bytes in the middle of the attachment on disk at
>> regular intervals. As I understand it, this is so that you can store
>> attachments that contain something like a database header without
>> breaking your database. Otherwise, storing something that looks like
>> the BTree header in an attachment could cause couch to open the
>> database incorrectly after an odd crash situation.
>>
>> This behavior may sound very paranoid and strange, but one thing is
>> for sure: CouchDB is designed to be *robust*. Really, really, _really_
>> hard to lose your data.
>>
>> On Wed, Mar 24, 2010 at 01:01, Vasili Batareykin <john2do@gmail.com>
>> wrote:
>> > pipelining? you mean keepalive? ab holds the test if you supply the -k
>> > option (Use HTTP KeepAlive feature); it seems that CouchDB's httpd doesn't
>> > know about this. Yes, throughput (in b/s) is better, but on localhost, if
>> > I use the same test with nginx I get around 1000 #/sec on a 340k file
>> > (344294.81 [Kbytes/sec]). Yes, nginx uses sendfile for this operation;
>> > yes, the fs cache is used too. But 70 #/sec with couchdb ...
>> >
>> > 2010/3/24 Randall Leeds <randall.leeds@gmail.com>
>> >
>> >> If you multiply (#/sec) by file size, you are actually getting _better_
>> >> throughput with the larger files.
>> >> Do you know if the ab command uses HTTP 1.1 pipelining? If not, HTTP
>> >> overhead would explain the extra time.
>> >>
>> >
>>
>
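Randall's throughput arithmetic can be checked quickly against the numbers Vasili quotes (1000 vs 70 requests/sec on the same ~340k file):

```python
# Throughput = requests/sec x file size; figures taken from the thread.
file_kb = 340            # ~340k attachment Vasili tested with
nginx_rps = 1000         # nginx requests/sec on localhost (sendfile)
couch_rps = 70           # CouchDB requests/sec on the same file

nginx_kbps = nginx_rps * file_kb   # 340000 KB/s, close to ab's 344294.81 [Kbytes/sec]
couch_kbps = couch_rps * file_kb   # 23800 KB/s
print(nginx_kbps, couch_kbps, round(nginx_kbps / couch_kbps, 1))  # 340000 23800 14.3
```

So the gap is roughly 14x in raw bytes/sec, which is why per-request HTTP overhead (no keep-alive, no pipelining) is a plausible explanation for part of it.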
