couchdb-user mailing list archives

From Randall Leeds <randall.le...@gmail.com>
Subject Re: performance?
Date Wed, 24 Mar 2010 08:12:34 GMT
Yes, I do mean KeepAlive; sorry for the confusion.
CouchDB should support it. Can you show a dump of the headers Couch
actually receives somehow? Maybe there is something silly going on, like
a case-sensitivity issue with the headers.
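
If it helps, here is a throwaway sketch (plain Python, nothing
CouchDB-specific; port 8001 is an arbitrary choice) that just prints the
request line and headers a client sends, so you can point ab -k at it and
see whether Connection: Keep-Alive is really being sent and how it is cased:

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 8001))
srv.listen(1)

conn, _ = srv.accept()
request = conn.recv(65536).decode("latin-1")
print(request.split("\r\n\r\n", 1)[0])   # request line + headers only
conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
conn.close()
srv.close()

(ab will complain about the run itself, since this answers only a single
request; the point is just to capture the headers.)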

CouchDB cannot use sendfile (or an Erlang equivalent) because it puts
bytes into the middle of the attachment on disk at regular intervals. As
I understand it, this is so that you can store attachments containing
something that looks like a database header without breaking your
database. Otherwise, storing something resembling the B-tree header in an
attachment could cause Couch to open the database incorrectly after an
odd crash.
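
To illustrate the general idea only (the block size and marker byte below
are made up; this is not CouchDB's actual on-disk format), think of the
attachment bytes being chopped into fixed-size blocks, each prefixed with
a marker byte that has to be stripped out again on read. That
strip-on-read step is exactly what rules out handing the raw byte range
straight to sendfile:

# Conceptual sketch only: fixed-size blocks with a one-byte prefix so
# attachment bytes can never be mistaken for a real database header.
BLOCK_SIZE = 4096
DATA_MARKER = b"\x00"

def write_blocks(payload):
    """Interleave a marker byte at the start of every block."""
    out = bytearray()
    for i in range(0, len(payload), BLOCK_SIZE - 1):
        out += DATA_MARKER + payload[i:i + BLOCK_SIZE - 1]
    return bytes(out)

def read_blocks(stored):
    """Strip the marker bytes back out; this is why sendfile can't be used."""
    out = bytearray()
    for i in range(0, len(stored), BLOCK_SIZE):
        out += stored[i + 1:i + BLOCK_SIZE]
    return bytes(out)

original = b"x" * 10000
assert read_blocks(write_blocks(original)) == original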

This behavior may sound very paranoid and strange, but one thing is for
sure: CouchDB is designed to be *robust*. Really, really, _really_ hard
to lose your data.

On Wed, Mar 24, 2010 at 01:01, Vasili Batareykin <john2do@gmail.com> wrote:
> Pipelining? You mean keepalive? ab hangs during the test if you supply the
> -k option ("Use HTTP KeepAlive feature"); it seems CouchDB's httpd doesn't
> know about this. Yes, throughput (in b/s) is better, but on localhost, if I
> run the same test against nginx I get around 1000 #/sec on a 340k file
> (344294.81 [Kbytes/sec]). Yes, nginx uses sendfile for this operation, and
> yes, the fs cache is used too. But only 70 #/sec with CouchDB ...
>
> 2010/3/24 Randall Leeds <randall.leeds@gmail.com>
>
>> If you multiply (#/sec) by file size, you are actually getting _better_
>> throughput with the larger files.
>> Do you know whether the ab command uses HTTP 1.1 pipelining? If not, HTTP
>> overhead would explain the extra time.
>>
>
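
For a rough sense of the gap in the figures quoted above, a quick
back-of-envelope (using only the quoted numbers and assuming roughly
340 KB per response; these are not new measurements):

file_kb = 340
nginx_rps = 1000   # reported ~1000 requests/sec against nginx
couch_rps = 70     # reported ~70 requests/sec against CouchDB
print("nginx:   ~%d MB/s" % (nginx_rps * file_kb / 1024))   # ~332 MB/s
print("couchdb: ~%d MB/s" % (couch_rps * file_kb / 1024))   # ~23 MB/s

That works out to roughly a 14x difference in bytes per second between
the two servers in this particular localhost test.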
