couchdb-dev mailing list archives

From Jan Lehnardt <>
Subject Re: Setting up CouchDB 2.0
Date Sun, 26 Oct 2014 19:04:06 GMT
Hey all,

thanks all for your input! I have a revised proposal that I’d like to
share with you for vetting.

It is a bit simpler than the last one and it includes a more detailed
specification for the _setup endpoint that drives all this.

This proposal retains admin party mode. *If* we decide to remove admin
party, not much has to be changed for this proposal, so ignore that
particular bit.

Best to be read in monospace, here’s a copy:

N. End User Action
 - What happens behind the scenes.

1. Launch CouchDB with `$ couchdb`, or init.d, or any other way, exactly
like it is done in 1.x.x.
 - CouchDB launches and listens on its default address and port.

From here on, there are two paths: one is via Fauxton (a), the other is
using an HTTP endpoint (b). Fauxton just uses the HTTP endpoint in (b).
(b) can be used to set up a cluster programmatically.

2.a. Go to Fauxton. There is a “Cluster Setup” tab in the sidebar. Go
to the tab and get presented with a form that asks you to enter an admin
username, admin password and optionally a bind_address and port to bind
to publicly. Submit the form with the [Enable Cluster] button.

 - POST to /_setup with
     {
       "action": "enable_cluster",
       "admin": {
         "user": "username",
         "pass": "password"
       },
       ["bind_address": "xxxx",]
       ["port": yyyy]
     }

 This sets up the admin user on the current node and binds to the default
 or the specified ip:port. It also logs the admin user into Fauxton
 automatically.

2.b. POST to /_setup as shown above.

Repeat on all nodes.
 - keep the same username/password everywhere.
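For 2.b, a minimal sketch in Python of what scripting this could look like. Note that `/_setup`, the `enable_cluster` action, and all field names are placeholders from this proposal, not a final API; the node hostnames are made up for illustration:

```python
import json

def enable_cluster_body(user, password, bind_address=None, port=None):
    """Build the JSON body for the proposed `enable_cluster` action.
    bind_address and port are optional, as in the proposal."""
    body = {
        "action": "enable_cluster",
        "admin": {"user": user, "pass": password},
    }
    if bind_address is not None:
        body["bind_address"] = bind_address
    if port is not None:
        body["port"] = port
    return json.dumps(body)

# The same body would be POSTed to /_setup on every node,
# keeping the same username/password everywhere.
for node in ["node-a.example.com", "node-b.example.com"]:
    print(node, enable_cluster_body("admin", "s3cret"))
```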

3. Pick any one node, for simplicity use the first one, to be the
“setup coordination node”.
 - this is a “master” node that manages the setup and requires all
   other nodes to be able to see it and vice versa. Setup won’t work
   with unavailable nodes (duh). The notion of “master” will be gone
   once the setup is finished. At that point, the system has no
   master node. Ignore I ever said “master”.

a. Go to Fauxton / Cluster Setup. Once the cluster is enabled, the
UI shows an “Add Node” interface with the fields admin and node:
 - POST to /_setup with
     {
       "action": "add_node",
       "admin": { // should be auto-filled from Fauxton
         "user": "username",
         "pass": "password"
       },
       "node": {
         "host": "hostname",
         ["port": 5984]
       }
     }

b. As in a., but without the Fauxton bits: just POST to /_setup.
 - this request will do this:
  - on the “setup coordination node”:
    - check if we have an Erlang Cookie Secret. If not, generate
      a UUID and set the Erlang cookie to that UUID.
     // TBD: persist the cookie, so it survives restarts
   - make a POST request to the node specified in the body above
     using the admin credentials in the body above:
      POST to http://username:password@node_b:5984/_setup with:
        {
          "action": "receive_cookie",
          "cookie": "<secretcookie>"
        }
      // TBD: persist the cookie on node B, so it survives restarts

   - when the request to node B returns, we know the Erlang-level
     inter-cluster communication is enabled and we can start adding
     the node on the CouchDB level. To do that, the “setup
      coordination node” does this to its own HTTP endpoint:
     PUT /nodes/node_b:5984 or the same thing with internal APIs.

- Repeat for all nodes.
- Fauxton keeps a list of all set up nodes for users to see.
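The coordination node's side of the flow above could be sketched like this. This is a hypothetical illustration only: `post` and `put` stand in for real HTTP calls, `state` for whatever the coordination node keeps in memory, and the cookie-persistence TBD is left out:

```python
import uuid

def add_node(state, post, put, host, port=5984,
             user="username", password="password"):
    """Sketch of the proposed `add_node` action on the setup
    coordination node. `post`/`put` are injected HTTP helpers:
    post(url, json_body) targets the new node, put(path) targets
    the coordination node's own HTTP endpoint."""
    # 1. Generate the Erlang cookie secret once, on first use.
    #    // TBD: persist the cookie, so it survives restarts
    if "cookie" not in state:
        state["cookie"] = uuid.uuid4().hex
    # 2. Hand the cookie to the target node, so Erlang-level
    #    inter-cluster communication can be enabled.
    post(f"http://{user}:{password}@{host}:{port}/_setup",
         {"action": "receive_cookie", "cookie": state["cookie"]})
    # 3. When that returns, register the node at the CouchDB level
    #    via the coordination node's own endpoint.
    put(f"/nodes/{host}:{port}")
    state.setdefault("nodes", []).append(f"{host}:{port}")
```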

4.a. When all nodes are added, click the [Finish Cluster Setup] button
in Fauxton.
 - this does POST /_setup
     "action": "finish_cluster"

b. Same as in a.

 - this manages the final setup bits, like creating the _users,
   _replicator and _db_updates endpoints and whatever else is needed.
   // TBD: collect what else is needed.

## The Setup Endpoint

This is not a REST-y endpoint, it is a simple state machine operated
by HTTP POST with JSON bodies that have an `action` field.

### State 1: No Cluster Enabled

This is right after starting a node for the first time, and any time
before the cluster is enabled as outlined above.

GET /_setup
{"state": "cluster_disabled"}

POST /_setup {"action":"enable_cluster"...} -> Transition to State 2
POST /_setup {"action":"enable_cluster"...} with empty admin user/pass, an invalid
host/port, or an unavailable host/port -> Error
POST /_setup {"action":"anything_but_enable_cluster"...} -> Error

### State 2: Cluster enabled, admin user set, waiting for nodes to be added.

GET /_setup
{"state": "cluster_enabled", "nodes": []}

POST /_setup {"action":"enable_cluster"...} -> Error
POST /_setup {"action":"add_node"...} -> Stay in State 2, but return "nodes":["node B"]
on GET
POST /_setup {"action":"add_node"...} -> if target node not available, Error
POST /_setup {"action":"finish_cluster"} with no nodes set up -> Error
POST /_setup {"action":"finish_cluster"} -> Transition to State 3

### State 3: Cluster set up, all nodes operational

GET /_setup
{"state":"cluster_finished","nodes":["node a", "node b", ...]}

POST /_setup {"action":"enable_cluster"...} -> Error
POST /_setup {"action":"finish_cluster"...} -> Stay in State 3, do nothing
POST /_setup {"action":"add_node"...} -> Error
POST /_setup?i_know_what_i_am_doing=true {"action":"add_node"...} -> Add node, stay in
State 3.

// TBD: we need to persist the setup state somewhere.
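To make the transitions above concrete, here is a toy Python model of the three states. It uses the placeholder action and state names from this proposal, and the `?i_know_what_i_am_doing=true` override is modeled as a `force` flag:

```python
class SetupError(Exception):
    """Raised where the spec above says -> Error."""

class SetupStateMachine:
    """Toy model of the proposed /_setup state machine."""

    def __init__(self):
        self.state = "cluster_disabled"  # State 1
        self.nodes = []

    def get(self):
        info = {"state": self.state}
        if self.state != "cluster_disabled":
            info["nodes"] = self.nodes
        return info

    def post(self, action, force=False, **body):
        if self.state == "cluster_disabled":      # State 1
            if action != "enable_cluster":
                raise SetupError("cluster not enabled yet")
            admin = body.get("admin", {})
            if not admin.get("user") or not admin.get("pass"):
                raise SetupError("admin user/pass required")
            self.state = "cluster_enabled"        # -> State 2
        elif self.state == "cluster_enabled":     # State 2
            if action == "enable_cluster":
                raise SetupError("cluster already enabled")
            elif action == "add_node":
                self.nodes.append(body["node"])   # stays in State 2
            elif action == "finish_cluster":
                if not self.nodes:
                    raise SetupError("no nodes set up")
                self.state = "cluster_finished"   # -> State 3
            else:
                raise SetupError("unknown action")
        else:                                     # State 3
            if action == "finish_cluster":
                pass                              # idempotent, do nothing
            elif action == "add_node" and force:  # ?i_know_what_i_am_doing=true
                self.nodes.append(body["node"])
            else:
                raise SetupError("cluster setup already finished")
```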

* * *

So far.

I think this is simpler than the first try. It retains the getting-started
simplicity of 1.x.x. It is fully scriptable via HTTP. It has
the same setup-security properties as 1.x.x. It requires an admin account
for cluster setup, but only at the last possible moment in the
process. It still fully hides the Erlang cluster / secure cookie setup
from the end-user.

Does this sound sensible to you? Am I missing anything aside from the
TBD bits, for which, if you have ideas, I’d love your input! :)

Are there any deal-breaker flaws in this? What can be made simpler or
more clear?

Note that all the HTTP endpoints and field names are just placeholders,
no need to bikeshed on these just yet :)

I’m looking forward to your feedback!


> On 25 Sep 2014, at 20:06 , Alexander Shorin <> wrote:
> On Thu, Sep 25, 2014 at 7:56 PM, Nick Pavlica <> wrote:
>> -The short version:
>> I would like to propose that CouchDB be developed and maintained separately
>> from a management GUI.
> That would still be possible. The main idea of Jan's proposal is to
> simplify the cluster setup process using the Fauxton UI and the magic /_setup
> helper. But that doesn't mean that you won't be able to do the same
> manually from the command line interface using plain old text configs.
> It's just a little bit more work for you in that case.
>> -The longer rambling version:
>> I would like to see CouchDB 2.x+ adopt a model that resembles that of Riak,
>> Cassandra, and others, where there is a core server and everything else is
>> optional.  It’s so easy to set up a Cassandra and Riak server or cluster
>> from the command line with just a bit of good documentation.  I really like
>> the fact that they are decoupled from an administrative tool like Futon or
>> Fauxton.  By decoupling the admin GUIs from the database, it paves the way
>> for others to create new GUI tools, and reduces the effort to release new
>> database versions.  While taking a current build of the Master branch for a
>> spin, I was trying to use Futon only to discover that it wasn’t ready for
>> the changes made in 2.0.  After some discussion on IRC, I learned that it
>> would be replaced with Fauxton.  Once I left Futon behind and used the
>> command line I was up and running.  In the end, it was much easier to work
>> from the command line than trying to work with an outdated tool that was
>> hurting more than helping.  This is not to say that Futon and Fauxton
>> aren’t great tools, but they add additional complexity and development
>> effort outside the core objective of a database.  Having a minimal database
>> core also allows administrators to have a reduced burden when it comes to
>> system administration because there are fewer system dependencies to
>> update, manage, and distribute.  On small systems it isn’t as big a deal, but
>> the larger systems become, the harder they are to manage.  Additionally,
>> having to interact with a UI makes it harder to setup a cluster with
>> deployment tools like Chef, etc.  CoreOS, really highlights the need to
>> reduce the administrative burden when managing large systems.  Here is a
>> video that illustrates why/how CoreOS has stripped down everything but what
>> is needed (  While not exactly
>> relevant, it does convey the general idea.  It could be called CoreCouch,
>> CouchCore, or … :)
> CouchDB 2.0 has a new project layout where each component lives in its
> own repository and is roughly pluggable: you can easily turn off any
> component you don't need (as long as it's not part of the core), and
> Fauxton and the docs are such components.
> --
> ,,,^..^,,,
