couchdb-dev mailing list archives

From J Chris Anderson <jch...@gmail.com>
Subject Re: delayed_commits false
Date Mon, 05 Jul 2010 18:17:39 GMT
For a relatively sane look at the tradeoffs we're talking about, this is a good resource:

http://developer.postgresql.org/pgdocs/postgres/runtime-config-wal.html

I wish it were simple to write a heuristic that would detect single serialized-client workloads
and delay commits, but I don't think it is.

I lean (slightly) toward leaving delayed_commits = true because the worst-case scenario, even
in the case of a crash, isn't data corruption, just the loss of data from the most recent activity.
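
For reference, here is where the setting being discussed lives. This is a sketch of the relevant local.ini fragment, with the section name assumed from the CouchDB configuration layout of that era:

```ini
[couchdb]
; true  = batch fsyncs, up to ~1s of recent writes can be lost on crash
; false = fsync on every update, slower for single serialized writers
delayed_commits = true
```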

It is also worth noting that there is an ensure_full_commit API independent of the configuration
value, so if you are writing high-value data, you can call ensure_full_commit (or
use a header value to make the last PUT or POST operation force a full commit).
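
A minimal sketch of the two per-request durability options mentioned above, using only the Python standard library. It assumes a CouchDB at the default local address, and the helper names are our own; the requests are only constructed here, not sent:

```python
import json
import urllib.request

COUCH = "http://127.0.0.1:5984"  # assumed default CouchDB address

def ensure_full_commit_request(db):
    """Build a POST /{db}/_ensure_full_commit request, which asks the
    server to fsync everything committed so far."""
    return urllib.request.Request(
        f"{COUCH}/{db}/_ensure_full_commit",
        data=b"",
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def full_commit_put_request(db, doc_id, doc):
    """Build a document PUT carrying the X-Couch-Full-Commit header,
    which forces a full commit for this one write."""
    return urllib.request.Request(
        f"{COUCH}/{db}/{doc_id}",
        data=json.dumps(doc).encode(),
        headers={
            "Content-Type": "application/json",
            "X-Couch-Full-Commit": "true",
        },
        method="PUT",
    )
```

Either approach lets an application keep delayed_commits = true globally while still getting a durable fsync for the writes that matter.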

I think this is worth discussing. I'm not strongly in favor of the delayed_commit=true setting,
but I do think it is slightly more user-friendly...

Chris

On Jul 5, 2010, at 10:02 AM, Mikeal Rogers wrote:

> For the concurrent performance tests I wrote in relaximation it's actually
> better to run with delayed_commits off because it measures the roundtrip
> time of all the concurrent clients.
> 
> The reason it's enabled by default is because of apache-bench and other
> single writer performance test tools. From what I've seen, it doesn't
> actually improve write performance under concurrent load, and it leads to a kind
> of blocking behavior when you throw more writes at it than it can
> fsync in a second. The degradation in performance is pretty huge with
> this "blocking" in my concurrent tests.
> 
> I don't know of a lot of good concurrent performance test tools which is why
> I went and wrote one. But, it only tests CouchDB and people love to pick up
> one of these tools that tests a bunch of other dbs (poorly) and be like
> "CouchDB is slow" because they are using a single writer.
> 
> But, IMHO it's better to ship with more guarantees about consistency than
> optimized for crappy perf tools.
> 
> -Mikeal
> 
> On Mon, Jul 5, 2010 at 8:49 AM, Volker Mische <volker.mische@gmail.com>wrote:
> 
>> Hi All,
>> 
>> delayed_commits was enabled to get better performance, especially for
>> single writers. The price you pay is that you potentially lose up to one
>> second of writes in case of a crash.
>> 
>> Such a setting makes sense, though in my opinion it shouldn't be enabled by
>> default. I expect* that people running into performance issues will at least take
>> a look at the README or a FAQ section somewhere, where the delayed_commits
>> setting could be pointed out.
>> 
>> I'd like to be able to say that on a vanilla CouchDB it's hard to lose
>> data, but I can't atm. I'm also well aware that there will be plenty of
>> performance tests when 1.0 is released, and people will complain (if
>> delayed_commits were set to false by default) that it is horribly slow.
>> Still, safety of the data is more important to me.
>> 
>> If the only reason why delayed_commits is true by default are the
>> performance tests of some noobs, I really don't think it's a price worth
>> paying.
>> 
>> *I know that in reality people don't
>> 
>> I would like to see delayed_commits=false for 1.0
>> 
>> Cheers,
>> Volker
>> 

