couchdb-user mailing list archives

From Robert Newson <>
Subject Re: Best Choice for redundant production environment?
Date Wed, 17 Jul 2013 19:22:15 GMT
BigCouch provides both redundancy and partitioning. By default it keeps
three copies of the data and uses a quorum mechanism, which is generally
superior to three independent CouchDB nodes inter-replicating.

And, of course, the BigCouch project is merging with CouchDB.
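To make the quorum point concrete, here is a minimal sketch (not BigCouch's actual code) of the majority-quorum arithmetic behind the three-copy default: with write and read quorums of 2 out of 3, any read set must overlap any write set, and the cluster tolerates one node being down.

```python
# Sketch of majority-quorum arithmetic, assuming BigCouch's default of
# three copies per document. Not BigCouch source code.
N = 3            # copies of each document
W = N // 2 + 1   # write quorum: 2 of 3 copies must acknowledge a write
R = N // 2 + 1   # read quorum: 2 of 3 copies must answer a read

# Any write set and any read set of these sizes must overlap, so a
# quorum read always sees at least one copy of the latest quorum write.
assert W + R > N

# Either quorum is still reachable with one node failed.
assert W <= N - 1 and R <= N - 1
```

This overlap guarantee is what independent inter-replicating nodes lack: with plain replication, a read after a write may land on a node the change has not reached yet.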


On 17 July 2013 19:59, Nick North <> wrote:
> I run a production environment with three nodes several thousand miles
> apart with full-mesh replication - it behaves beautifully, seamlessly
> recovering from network outages, and I don't touch it for months on end. My
> particular setup is multi-master, so every node is a production one, and
> clients swap to another node if their local one goes down. Document ids are
> designed to be globally unique so that they can never clash between nodes,
> and it happens that the app never needs to edit documents, so there is no
> chance of multiple edits colliding.
> Nick
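Nick's globally unique document ids could be produced in many ways; one hypothetical sketch (the node names and scheme below are illustrative, not from his setup) is to prefix a random UUID with the originating node's name, so ids can never clash between nodes:

```python
import uuid

def make_doc_id(node_name: str) -> str:
    # Prefix the id with the originating node, then append a random
    # UUID, so two nodes can never generate the same document id.
    return f"{node_name}:{uuid.uuid4().hex}"

a = make_doc_id("node-a")
b = make_doc_id("node-b")
```

Because every id is unique to one node and documents are never edited, multi-master replication in his setup has no way to produce conflicting revisions.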
> On 17 July 2013 19:41, Dan Santner <> wrote:
>> I'm about to put together my production environment using couchdb as the
>> backend.  I've been running my test environment on a single linux node
>> (couchdb ver 1.2) for about a year without even restarting it once!  That
>> activity has actually been more than I can imagine in our production
>> environment, however, I'm nervous about going into production running a
>> single node.
>> My question to you guys is this: do I look into running BigCouch, and
>> does that even handle redundancy or just sharding?  Do I simply set up
>> two nodes and let them cross-replicate?  Cross replication just seems
>> ripe for problems, but I've never tried it, so I'm asking you all what
>> you'd do.
>> My production traffic will not be high by any measure.  There will be
>> bursts of activity but as mentioned, nothing a single node hasn't been able
>> to handle so far.
>> Any experiences you guys have to share are appreciated.
>> Dan.
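For the two-node cross-replication option Dan asks about, CouchDB's `_replicator` database accepts one replication document per direction; a continuous pair gives bidirectional replication. A sketch of the two documents, with placeholder hostnames and database name:

```python
# Sketch of the pair of replication documents you would PUT into the
# _replicator database to cross-replicate two nodes. The hostnames
# and database name "mydb" are placeholders.
def replication_doc(source: str, target: str) -> dict:
    return {
        "source": source,
        "target": target,
        "continuous": True,  # keep replicating as new changes arrive
    }

a_to_b = replication_doc("http://node-a:5984/mydb", "http://node-b:5984/mydb")
b_to_a = replication_doc("http://node-b:5984/mydb", "http://node-a:5984/mydb")
```

Each document is installed on the node it should run from; as Nick notes, this works smoothly as long as concurrent edits to the same document are avoided or conflicts are handled.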
