couchdb-user mailing list archives

From Miles Fidelman <>
Subject Re: massive replication?
Date Mon, 26 Oct 2009 17:27:35 GMT
Paul Davis wrote:
> Miles,
> On the one hand, it sounds like you could solve this with XMPP and
> use CouchDB as a backing store. People have already connected RabbitMQ
> and CouchDB, I can't imagine that connecting ejabberd and CouchDB
> would be much harder. The pubsub extensions could very much be what
> you're wanting.
A couple of issues with this - XMPP requires connectivity; it's not 
really asynchronous message passing.  And both XMPP and RabbitMQ are 
really hub-and-spoke architectures, not distributed models.

I keep coming back to NNTP (USENET) as a model for many-to-many messaging:

- push a message into a newsgroup on any NNTP node subscribing to that 
newsgroup
- nodes exchange "I-have"/"You-have" lists on a regular basis
- messages propagate to all subscribing nodes by what is essentially a 
flooding or epidemic routing mechanism
- pretty quickly, a message propagates to all nodes subscribing to the 
newsgroup
- lack of connectivity simply delays message propagation
- the whole system scales massively, and is very robust in the face of 
connectivity outages, node failures, etc. (messages can flow across 
multiple routes)
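To make the exchange step concrete, here's a toy sketch of the 
"I-have"/"You-have" anti-entropy round between two nodes. Everything 
here is a stand-in (real NNTP uses IHAVE/CHECK/TAKETHIS commands and 
Message-IDs over the wire, not Python sets), but it shows why flooding 
converges and tolerates outages - a missed round just delays the pull:

```python
class Node:
    """Stand-in for an NNTP peer: holds articles keyed by message-id."""

    def __init__(self, name, messages):
        self.name = name
        self.messages = dict(messages)  # message-id -> article body

    def i_have(self):
        """Advertise the message-ids this node currently holds."""
        return set(self.messages)

    def pull_missing(self, peer):
        """Fetch any articles the peer has that we don't (flooding step)."""
        for msg_id in peer.i_have() - self.i_have():
            self.messages[msg_id] = peer.messages[msg_id]

def exchange(a, b):
    """One full anti-entropy round: each side pulls what it's missing."""
    a.pull_missing(b)
    b.pull_missing(a)

a = Node("a", {"<1@a>": "hello"})
b = Node("b", {"<2@b>": "world"})
exchange(a, b)
# After one round both nodes hold the union of the two article sets.
```

Run pairwise between arbitrarily connected peers, repeated rounds of 
this converge every reachable node to the same article set, which is 
the epidemic-routing property described above.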

In some sense, what I'm thinking of would look a lot like:

- a group of CouchDB nodes all subscribe to a newsgroup
- each node publishes changes as messages to that newsgroup
- NNTP takes care of getting messages everywhere, eventually
- each node looks for incoming messages and applies them as changes
- use a shared key to secure things (note: some implementations of NNTP 
already support secure messaging)
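As a back-of-the-envelope sketch of that loop (all names invented - a 
real version would read the CouchDB _changes feed and post via nntplib, 
and would need real conflict handling rather than naive highest-rev-wins):

```python
import json

class Newsgroup:
    """Stand-in for a shared newsgroup: an append-only list of articles."""

    def __init__(self):
        self.articles = []

    def post(self, article):
        self.articles.append(article)

class CouchNode:
    """Stand-in for a CouchDB node syncing through the newsgroup."""

    def __init__(self, group):
        self.group = group
        self.docs = {}   # doc_id -> (rev, body)
        self.seen = 0    # how many articles we've already applied

    def publish_change(self, doc_id, rev, body):
        """Write locally and push the change as a newsgroup message."""
        self.docs[doc_id] = (rev, body)
        self.group.post(json.dumps({"id": doc_id, "rev": rev, "body": body}))

    def apply_incoming(self):
        """Read new articles and apply them; higher rev wins (naive)."""
        for article in self.group.articles[self.seen:]:
            change = json.loads(article)
            current = self.docs.get(change["id"])
            if current is None or change["rev"] > current[0]:
                self.docs[change["id"]] = (change["rev"], change["body"])
        self.seen = len(self.group.articles)

group = Newsgroup()
a, b = CouchNode(group), CouchNode(group)
a.publish_change("doc1", 1, {"x": 1})
b.publish_change("doc2", 1, {"y": 2})
a.apply_incoming()
b.apply_incoming()
# Both nodes now hold both documents, having never talked directly.
```

The point of the sketch is the shape of the design: nodes only ever 
talk to the newsgroup, so NNTP's flooding does all the routing, and a 
node that's offline just catches up on its next apply_incoming pass.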

A similar approach could be taken using:
- a distributed hash table as a message queue (that's what spread and 
splines seem to do)
- the DIS or HLA protocols (used for distributed simulation - keeping 
multiple copies of a "world" synchronized)


In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra
