couchdb-commits mailing list archives

From Apache Wiki <>
Subject [Couchdb Wiki] Update of "ConfiguringDistributedSystems" by DaleJohnson
Date Mon, 07 Jul 2008 07:08:30 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Couchdb Wiki" for change notification.

The following page has been changed by DaleJohnson:

  This is a stub for a page to discuss how to actually get CouchDB running in a distributed fashion.
+ == Editorial Notes ==
+ I doubt that the wiki is a good place to have this discussion.  The designers are most welcome
to take it onto the couchdb-dev email list.
+ If there is a Road Map document somewhere that discusses when certain possibly
unimplemented features are planned, perhaps someone could link it here.
   * I see that there is replication via the 'replication' functionality on the http://localhost:5984/_utils
console interface, but how does one distribute a database across, say 10 hosts?
   * Is there a way to specify the number of copies of a piece of data?  (Presumes not all
hosts have copies of each piece of data)
   * Is there a piece of this that can be configured in the couch.ini file, such that when
the topology changes (i.e. a server is added or removed) things can be put back into sync?
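As an aside on the first question: besides the Futon console, pair-wise replication can be triggered by POSTing a JSON body to a CouchDB server's `_replicate` endpoint. A minimal sketch of building that body (the peer host name is hypothetical):

```python
import json

def replicate_body(source, target):
    """Build the JSON body for CouchDB's POST /_replicate endpoint.

    source and target may be local database names or full URLs to a
    remote CouchDB instance.
    """
    return json.dumps({"source": source, "target": target})

# POST this body to http://localhost:5984/_replicate with
# Content-Type: application/json to push exampledb to a peer.
print(replicate_body("exampledb", "http://peer.example.com:5984/exampledb"))
```

This only replicates one database pair at a time; it does not by itself spread a database across 10 hosts, which is exactly the gap the questions above are probing.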
+ Excerpts from the Architectural Document:
+ {{{
+ Using just the basic replication model, many traditionally single server database applications
can be made distributed with almost no extra work.
+ }}}
+  * Let's try to document this.  What do we mean by '''distributed'''?
+ Excerpts from the wiki FAQ:
+ {{{
+ How Much Stuff can I Store in CouchDB?
+ With node partitioning, virtually unlimited. For a single database instance, the practical
scaling limits aren't yet known. 
+ }}}
+  * Implies that node partitioning is built into CouchDB.  Otherwise it means that every
platform known to man supports a virtually unlimited amount of stuff.  All you'd have to do
is set up your own partitioning scheme ;)
+ === Distributed defined ===
+ Here's what some people might ''assume'' we mean by distributed data store:
+  * We (couchdb) have a client which will '''shard''' the data by key and direct it to the
correct server (shard), so that the writes of the system will '''scale'''.  That is, there
are many ''writers'' in a collision-free update environment.
+  * Reads may scale, if they outnumber the writes, by using some form of replication to read-only clients.
+  * If a master data store node is lost, then the client (or some proxy mechanism) can switch
over to a new master data store which is ''really up to date'' (i.e. within milliseconds), and the
client will continue without a hitch.
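The key-sharding client in the first bullet can be sketched independently of CouchDB; nothing below ships with CouchDB, the node URLs are hypothetical, and plain modulo hashing is used for brevity:

```python
import hashlib

# Hypothetical cluster of 10 CouchDB hosts (the "say 10 hosts" case above).
SHARDS = ["http://node%d.example.com:5984" % i for i in range(10)]

def shard_for(doc_id, shards=SHARDS):
    """Map a document id to one server via a stable hash.

    Plain modulo hashing is deterministic, but adding or removing a
    server remaps most keys; a consistent-hashing ring would keep that
    churn proportional to 1/len(shards), which matters for the
    topology-change question above.
    """
    digest = hashlib.md5(doc_id.encode("utf-8")).hexdigest()
    return shards[int(digest, 16) % len(shards)]
```

A write client would then PUT each document to `shard_for(doc_id)` plus the database path; read scaling would come from replicating each shard to read-only copies.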
