incubator-couchdb-dev mailing list archives

From: Nathan Vander Wilt <>
Subject: Re: Post-mortem
Date: Fri, 11 May 2012 15:54:48 GMT
On May 11, 2012, at 8:32 AM, Eli Stevens (Gmail) wrote:
> On Fri, May 11, 2012 at 7:57 AM, CGS <> wrote:
>> What I don't understand is the following:
>> 1. Those guys wanted a single front-end server that should keep up with
>> the incoming requests, correct? As far as I understood, CouchDB's philosophy
>> is based on data safety, which is implemented as direct writes to the hard
>> disk. So having only one front-end server whose disk you force to keep up
>> with a high-speed internet connection is like trying to force a river to
>> flow through a mouse hole.
> From my understanding of the post, the core issue wasn't a mismatch in
> scale between desired throughput (the river) and available throughput
> (the mousehole); it was that under high enough load CouchDB stopped
> honoring certain classes of requests entirely. That's not a "too
> slow" problem; it's a "fell over and can't get up" problem.
> I think it's very important that effort is made to reproduce and
> address these issues, since without being able to put more definite
> bounds on them, *everyone* is going to wonder if their load is high
> enough to trigger the problems.  Heck, I'm wondering it, and I don't
> typically have more than a couple hundred docs per DB (but a lot of
> DBs, and hundreds of megs of attachments per DB).

Here's one with steps to reproduce, albeit requiring a public-facing server capable of routing
one special "I own this site" request to a separate :

That one found me within ten minutes of trying out (themselves a CouchDB user,
IIRC), but it's just sat there since I filed it. Fortunately/unfortunately, the site I could take
down at ~40 concurrent requests gets about that many hits in a _month_, so I don't need
to switch to MySQL ;-)
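
If anyone wants to poke at that failure mode themselves, a crude concurrency hammer is
enough to see whether a node starts refusing or dropping requests rather than just slowing
down. The sketch below is only an illustration of that idea, not the exact repro from my
ticket; the URL and the 40-worker figure are placeholders based on the numbers above:

    # Fire TOTAL_REQUESTS GETs at a CouchDB endpoint with CONCURRENCY in flight
    # at once, and count how many come back 200 vs. error out or time out.
    # COUCH_URL and the numbers are placeholders, not from the original report.
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    COUCH_URL = "http://127.0.0.1:5984/some_db/some_doc"  # hypothetical endpoint
    CONCURRENCY = 40       # roughly the load level mentioned above
    TOTAL_REQUESTS = 400

    def fetch(_):
        try:
            with urllib.request.urlopen(COUCH_URL, timeout=10) as resp:
                return resp.status
        except Exception as exc:  # connection refused, timeout, HTTP 4xx/5xx, ...
            return type(exc).__name__

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = list(pool.map(fetch, range(TOTAL_REQUESTS)))

    ok = sum(1 for r in results if r == 200)
    failures = sorted({str(r) for r in results if r != 200})
    print("%d/%d requests succeeded; failure kinds: %s" % (ok, TOTAL_REQUESTS, failures))

If the node is merely slow, the success count stays high and latency climbs; if it has hit the
"fell over" state, whole classes of requests start coming back as connection errors.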
