couchdb-dev mailing list archives

From: Chris Anderson <jch...@apache.org>
Subject: Re: svn commit: r804427 - in /couchdb/trunk: etc/couchdb/default.ini.tpl.in share/www/script/test/delayed_commits.js src/couchdb/couch_db.erl src/couchdb/couch_httpd_db.erl
Date: Sat, 15 Aug 2009 17:17:28 GMT
On Sat, Aug 15, 2009 at 9:45 AM, Adam Kocoloski <kocolosk@apache.org> wrote:
>
> I believe we should try really hard not to lose users' data.  With
> delayed_commits = true our durability story is basically the same as Redis'.
>  I think that would be surprising to most new users.  Best,
>
> Adam
>

One middle-ground implementation that could work for throughput would
be to use the batch=ok ets-based storage, but instead of immediately
returning 202 Accepted, hold the connection open and only respond once
the batch has been written. This would let the server optimize the
batch size without the client needing to worry about it, and since we
would return 201 Created only after the write, we would maintain our
strong consistency guarantees.
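
Roughly what I have in mind (just a sketch, not the actual couch_db
code; the module name, FLUSH_AFTER_MS, and commit_batch/1 are all
placeholders): callers block in gen_server:call/3 while the server
accumulates a batch, and every waiting caller gets its reply only after
the batch is committed, so the HTTP layer can answer 201 Created
instead of 202 Accepted.

-module(batch_writer_sketch).
-behaviour(gen_server).

-export([start_link/0, write/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

-define(FLUSH_AFTER_MS, 50).   %% hypothetical knob, not a real config value

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

%% Blocks the caller until the batch containing Doc has been committed,
%% so the HTTP layer above can answer 201 Created.
write(Doc) ->
    gen_server:call(?MODULE, {write, Doc}, infinity).

init([]) ->
    {ok, #{docs => [], waiters => []}}.

handle_call({write, Doc}, From, #{docs := Docs, waiters := Ws} = State) ->
    case Docs of
        [] -> erlang:send_after(?FLUSH_AFTER_MS, self(), flush);
        _  -> ok
    end,
    %% No reply yet; the caller stays parked until the batch is durable.
    {noreply, State#{docs := [Doc | Docs], waiters := [From | Ws]}}.

handle_cast(_Msg, State) ->
    {noreply, State}.

handle_info(flush, #{docs := Docs, waiters := Ws}) ->
    ok = commit_batch(lists:reverse(Docs)),
    [gen_server:reply(W, ok) || W <- Ws],
    {noreply, #{docs => [], waiters => []}};
handle_info(_Other, State) ->
    {noreply, State}.

%% Stand-in for the real fsync'd append to the database file.
commit_batch(_Docs) ->
    ok.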

I like the idea of being able to tune the batch size internally within
the server. This could let CouchDB adjust automatically for
performance without changing its consistency guarantees, e.g. run
large batches under heavy load, but just do full_commits all the time
when accessed by a single user.
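
Something as simple as this could be enough (again purely hypothetical,
the module and the thresholds are made up): pick the flush delay from
how many writers are already waiting, so a lone client gets
near-full_commit latency while a busy server lets its batches grow.

-module(batch_policy_sketch).
-export([flush_delay_ms/1]).

%% Number of writers already parked -> how long to wait before flushing.
-spec flush_delay_ms(non_neg_integer()) -> pos_integer().
flush_delay_ms(0)               -> 1;    %% single writer: commit almost immediately
flush_delay_ms(N) when N =< 100 -> 25;   %% moderate load: small batches
flush_delay_ms(_Heavy)          -> 100.  %% heavy load: let batches grow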

Chris

-- 
Chris Anderson
http://jchrisa.net
http://couch.io
