incubator-couchdb-user mailing list archives

From CGS <>
Subject Re: CouchDB Replication lacking resilience for many database
Date Tue, 11 Oct 2011 00:48:26 GMT

I am no expert, but I do have one or two design questions and maybe one or
two suggestions (5000 continuous replications will overload your system).
1. Why don't you use more storage elements and break your DB into shards? That
way you can take some pressure off your system and spread it across the
other storage elements.
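If it helps, the sharding idea in point 1 might look something like this rough sketch: route each document to one of N shard databases by hashing its _id (the shard names and the md5 routing are my own assumptions, not anything CouchDB provides out of the box):

```python
import hashlib

def shard_for(doc_id, num_shards=4):
    """Return the (hypothetical) shard database name that should hold doc_id.

    Hashing the id keeps the routing deterministic, so reads and writes
    for the same document always land on the same shard.
    """
    digest = hashlib.md5(doc_id.encode("utf-8")).hexdigest()
    return "db_shard_%d" % (int(digest, 16) % num_shards)
```

The same function would then be used on both the write path and the read path, so no lookup table is needed.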
2. Why don't you use external triggers instead of continuous replication?
You can point the _changes feed at external processes that buffer the
updates and flush them via _bulk/parallel operations or on different ports
(though I don't suppose that solves the problem unless the bandwidth
between the two servers is broad enough).
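For point 2, a minimal sketch of the buffering idea: accumulate _changes rows and hand them off in batches. The flush callback here just stands in for a POST to /db/_bulk_docs; the class and parameter names are illustrative, not a CouchDB API:

```python
class ChangeBuffer:
    """Buffer rows from a _changes feed and flush them in bulk.

    'flush' is a callable that receives the accumulated list of docs,
    e.g. a function that POSTs them to /db/_bulk_docs.
    """

    def __init__(self, flush, batch_size=100):
        self._flush = flush
        self._batch_size = batch_size
        self._docs = []

    def add(self, doc):
        # Collect one change; flush automatically once the batch is full.
        self._docs.append(doc)
        if len(self._docs) >= self._batch_size:
            self.flush()

    def flush(self):
        # Push whatever is buffered (called on batch-full or on shutdown).
        if self._docs:
            self._flush(self._docs)
            self._docs = []
```

The external process would call add() for every row it reads off the feed and flush() once more when the feed goes quiet.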
Just keep in mind, when you design the solution to your problem, that your
bottleneck is not CPU/RAM/connection but the HDD (I was surprised to see
my HDD almost too slow for a 2.4 MB/s download, but that's MS Windows
here :D ).


On Tue, Oct 11, 2011 at 3:18 AM, Adam Kocoloski <> wrote:

> On Oct 10, 2011, at 8:02 PM, Chris Stockton wrote:
> > Hello,
> >
> > On Mon, Oct 10, 2011 at 4:19 PM, Filipe David Manana
> > <> wrote:
> >> On Tue, Oct 11, 2011 at 12:03 AM, Chris Stockton
> >> <> wrote:
> >> Chris,
> >>
> >> That said, the work is in the '1.2.x' branch (and master).
> >> CouchDB recently migrated from SVN to Git; see:
> >>
> >>
> >
> > Thank you very much for the response, Filipe. Do you possibly have any
> > documentation or a more detailed summary of what these changes include
> > and their possible benefits? I would love to hear any tuning or
> > replication tips you may have for our growth issues. Perhaps you could
> > answer a basic question if nothing else: do the changes in this branch
> > minimize the performance impact of continuous replication on many
> > databases?
> >
> > Regardless, I plan on getting a build of that branch and doing some
> > testing of my own very soon.
> >
> > Thank you!
> >
> > -Chris
> I'm pretty sure that even in 1.2.x and master each replication with a
> remote source still requires one dedicated TCP connection to consume the
> _changes feed.  Replications with a local source have always been able to
> use a connection pool per host:port combination.  That's not to downplay the
> significance of the rewrite of the replicator in 1.2.x; Filipe put quite a
> lot of time into it.
> The link to "those darn errors" just pointed to the mbox browser for
> September 2011.  Do you have a more specific link?  Regards,
> Adam
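A toy sketch of the per-host:port pooling Adam describes for local-source replications, with connect() standing in for opening a real TCP connection (all names here are illustrative, not CouchDB internals):

```python
import collections

class HostPool:
    """Reuse idle connections per (host, port) instead of opening a
    dedicated connection for every replication."""

    def __init__(self, connect, max_per_host=10):
        self._connect = connect            # factory: (host, port) -> connection
        self._max = max_per_host
        self._idle = collections.defaultdict(list)
        self._count = collections.defaultdict(int)

    def acquire(self, host, port):
        # Prefer an idle connection; only dial out when none is available
        # and the per-host cap has not been reached.
        key = (host, port)
        if self._idle[key]:
            return self._idle[key].pop()
        if self._count[key] >= self._max:
            raise RuntimeError("pool exhausted for %s:%d" % key)
        self._count[key] += 1
        return self._connect(host, port)

    def release(self, host, port, conn):
        # Return the connection for reuse by the next caller.
        self._idle[(host, port)].append(conn)
```

The contrast with a continuous remote replication is that the latter holds its _changes connection open indefinitely, so it can never go back into a pool like this.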
