Subject: Re: Peer-to-Peer Replication
From: Christian Polzer
Date: Wed, 6 Apr 2011 22:32:43 +0200
To: user@couchdb.apache.org

Regards,
Chris

On 06.04.2011, at 22:24, Zdravko Gligic wrote:

>> *You* make the graph -- not CouchDB!
>
> Given a large number of peers, could this not be a daunting task - to
> ensure that everyone gets eventually updated in a relatively efficient
> and timely manner?
>
>> CouchDB will follow your instructions. No more or no less.
>
> Am I correct in my interpretation of the documentation that, regardless
> of the overall design, replication is always between 2 nodes - a source
> and a target? In other words there is no way to throw at CouchDB
> multiple nodes as sources and/or destinations and it would magically
> keep them all updated.

I am now quoting from my soon-to-be-finished thesis (yay! :-) ):

Replication in CouchDB can be configured in multiple ways:

* Replication can be pushed or pulled (a sketch of the corresponding calls
  follows after this list). This is very handy for replication of mobile
  databases, where no fixed IP can be provided and the telecommunication
  providers may prohibit the use of dynamic DNS. Replication can simply
  happen by pushing towards other nodes.
* Replication can be configured as master-slave or peer-to-peer.
* Replication can be continuous, or triggered once by the application when
  needed. Until version 1.2 of CouchDB is released, replication settings
  are lost when CouchDB is restarted.
* Databases within one CouchDB node can be replicated to each other. This
  might be useful to keep a copy of a database on another hard drive
  within the same node.
* Replication can be filtered, so that not all information is replicated.
* Individual documents can be named for replication.
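Not from the thesis, but to make the options above concrete: here is a
minimal sketch of the corresponding HTTP calls against CouchDB's
_replicate handler, written in Python with the requests library. The node
URL, database names, and the filter name are made up for illustration.

import requests

LOCAL = "http://127.0.0.1:5984"   # the node that receives the request does the work

def replicate(source, target, **options):
    # POST a replication request to the local node's _replicate handler.
    body = {"source": source, "target": target}
    body.update(options)          # e.g. continuous=True, filter="...", doc_ids=[...]
    resp = requests.post(LOCAL + "/_replicate", json=body)
    resp.raise_for_status()
    return resp.json()

# Push: local database -> remote peer (works without a fixed IP or dynamic
# DNS on our side, because we open the connection).
replicate("mydb", "http://peer.example.org:5984/mydb")

# Pull, continuous: keep following the peer's changes (pre-1.2 this is lost
# when CouchDB restarts and has to be triggered again).
replicate("http://peer.example.org:5984/mydb", "mydb", continuous=True)

# Copy between two databases on the same node, e.g. as a local backup.
replicate("mydb", "mydb_backup", create_target=True)

# Filtered replication (filter function in a design doc) and replication of
# only named documents.
replicate("mydb", "http://peer.example.org:5984/mydb", filter="app/important_only")
replicate("mydb", "http://peer.example.org:5984/mydb", doc_ids=["doc-1", "doc-2"])

Note that whether you push or pull only changes which node does the work;
either way it is the same pairwise, two-node replication.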
I agree with the thought
"to throw at CouchDB multiple nodes as sources and/or destinations and it
would magically keep them all updated."

There is a feature coming with CouchDB 1.2 that includes a dedicated
document (or database?) for replication settings. It would be nice to be
able to replicate that one as well... (a speculative sketch of what such a
document might look like is at the end of this mail).

>> CouchDB doesn't ensure that the latest revision wins -- you are expected
>> to resolve conflicts in a way that makes sense for your application.
>
> I understand this in context of revisions to a single document.
> However, I was more curious about how it internally determined which
> of 2 arbitrary peers had a more recent and up to date copy.
>
> Thanks again.
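P.S.: To clarify what I mean by the upcoming replication-settings feature,
here is a purely speculative sketch. As I understand it, a replication
would be described by a document in a special _replicator database instead
of a one-off POST to _replicate, so it survives a restart; the field and
database names below are assumptions, not the final API.

import requests

LOCAL = "http://127.0.0.1:5984"

# A replication described as a document: the server picks it up, runs it,
# and (unlike a plain _replicate POST) restarts it after a server restart.
doc = {
    "_id": "push-mydb-to-peer",                     # any id you like
    "source": "mydb",
    "target": "http://peer.example.org:5984/mydb",  # hypothetical peer
    "continuous": True,
}
resp = requests.put(LOCAL + "/_replicator/" + doc["_id"], json=doc)
resp.raise_for_status()

# The server would then add status fields (e.g. _replication_state) to the
# document, and deleting the document would cancel the replication.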