couchdb-dev mailing list archives

From Alexander Shorin <kxe...@gmail.com>
Subject Re: Setting up CouchDB 2.0
Date Wed, 24 Sep 2014 18:36:15 GMT
That makes some bits clear. Thanks Jan!
--
,,,^..^,,,


On Wed, Sep 24, 2014 at 5:54 PM, Jan Lehnardt <jan@apache.org> wrote:
>
> On 24 Sep 2014, at 15:39 , Alexander Shorin <kxepal@gmail.com> wrote:
>
>> On Wed, Sep 24, 2014 at 5:12 PM, Jan Lehnardt <jan@apache.org> wrote:
>>>>> 1. User sets up a new CouchDB 2.0 (A), starts it and goes to Fauxton
>>>>>   - `couchdb` prompts for admin user and pass; if none are configured,
>>>>>     it rejects startup without them
>>>>
>>>> I think the password MUST be set manually in the config file. Otherwise
>>>> this will be nothing different from Admin Party: there is a race
>>>> condition where whoever opens Fauxton first wins.
>>>
>>> `couchdb` is a shell script that runs before Erlang is booted up, so
>>> there is no race condition.
>>
>> Aha, so the workflow is:
>> 1. Install CouchDB
>> 2. Run `couchdb` with some param and set the password
>
> In 1.x land, we already have a shell script (unix and win) `couchdb` that
> does everything to start beam(.smp). That same script now also needs to
> capture admin creds. There is no change to how it works today.
>
>> 3. Start the service
>
> This is implied in 2.
>
>> right? Not sure that this would be a popular feature, since it won't work
>> well on Windows and it's hard to automate in comparison with the config
>> file's [admins] section.
>
> This is not to the exclusion of the existing methods, just an addition
> to force an admin user setup on first start. Something which we are
> still openly discussing at this point, but I don’t see a technical
> reason not to have it. The automation is covered by accepting a
> config file, or env vars, or updating local.ini from your automation
> tools.
>
>
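(As a concrete sketch of the "updating local.ini from your automation tools"
path Jan mentions: the ini path and password below are assumed examples, but
CouchDB does replace a plaintext [admins] value with a salted hash on startup.)

    # Pre-seed the admin user before first start, so no interactive
    # prompt is needed. CouchDB hashes the plaintext value on boot.
    printf '[admins]\nadmin = relax\n' >> /usr/local/etc/couchdb/local.ini

    couchdb    # starts normally, since [admins] is already populated
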
>>
>>
>> On Wed, Sep 24, 2014 at 5:12 PM, Jan Lehnardt <jan@apache.org> wrote:
>>>>> 2. Fauxton sees that it has no nodes connected to it yet (/nodes db empty
>>>>>    or non-existent), so Fauxton shows a cluster setup screen (that we can
>>>>>    go back to later as well).
>>>>>   - *handwaving* something about a /_membership endpoint that I don’t
>>>>>     know much about, yet.
>>>>> 2.1. If the user just wants a single-node couch, there is a button
>>>>>      “Thanks, no cluster setup needed” and we go directly to step 5.
>>>>>
>>>>> 3. User sets up another CouchDB 2.0 (B) and notes its fully qualified
>>>>>    domain name or IP address (fqdn/ip)
>>>>> - and uses the same admin/pass combo as on node A.
>>>>>
>>>>> 4. User goes back to Fauxton on A, adds a node in the (fictitious at this
>>>>>    point) UI, and enters node B’s fqdn/ip
>>>>> - Fauxton posts this info to e.g. /_setup on A. A then checks whether the
>>>>>   node’s fqdn/ip is reachable; if not, it returns an error, and if yes, it
>>>>>   creates the node doc in /nodes
>>>>> - The post to /_setup also configures the Erlang cookie/shared secret
>>>>>   (this requires code that updates a shared cookie value, which could be a
>>>>>   UUID or a readable-secure random password, but results in really nice
>>>>>   end-user friendliness). This enables the secure Erlang-level
>>>>>   communication that the cluster needs to do its work, without any
>>>>>   end-user interaction.
>>>>> - A security consideration here is that a node is open to receiving a
>>>>>   shared secret via HTTP, so someone might be able to hijack an in-setup
>>>>>   cluster node. Requiring an admin/pass on node startup and making the
>>>>>   /_setup endpoint admin-only should mitigate that.
>>>>>
>>>>> // Repeat steps 3. and 4. until all nodes are added
>>>>>
>>>>> 5. User clicks a “cluster setup done” button and goes back to the
>>>>>    regular Fauxton start page.
>>>>> - Fauxton posts to /_setup with a body that indicates that the setup is
>>>>>   complete, and node A creates /_users and /_replicator (and whatever else
>>>>>   is needed) in the cluster, to get it into a production-ready state
>>>>>   (*waves hands*)
>>>>> - this could also flip a “cluster setup done” bit in all nodes; not yet
>>>>>   sure what we’d use this for, though. Maybe block all non-fauxton/setup
>>>>>   traffic until setup is done.
>>>>>
>>>>> There is some handwaving about how the adding-nodes machinery works under
>>>>> the hood, but I think that can be worked out later.
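
(For concreteness, a sketch of what the Fauxton-driven calls in steps 2-5
above might look like. The /_setup endpoint is fictitious at this point, as
noted, and every hostname, credential, and body field below is invented for
illustration.)

    # Step 2: check whether this node already knows about a cluster.
    curl -u admin:relax http://node-a.example.com:5984/_membership

    # Step 4: ask node A to add node B. A verifies that B is reachable,
    # creates the node doc in /nodes, and syncs the shared Erlang cookie.
    curl -u admin:relax -X POST http://node-a.example.com:5984/_setup \
         -H 'Content-Type: application/json' \
         -d '{"action":"add_node","host":"node-b.example.com","port":5984}'

    # Step 5: declare setup complete; node A creates /_users, /_replicator,
    # etc. in the cluster and (maybe) flips the "setup done" bit everywhere.
    curl -u admin:relax -X POST http://node-a.example.com:5984/_setup \
         -H 'Content-Type: application/json' \
         -d '{"action":"finish_setup"}'
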
>>>>
>>>> Too much magic is involved in /_setup.
>>>
>>> It is not well defined yet, yes, but too much magic? Let’s design it and
>>> decide then :)
>>>
>>> Also what would you do alternatively?
>>
>> Need to think about it, but technically all or most of the setup magic
>> could be done on the first node addition. I think it's obvious that if
>> you add one node to another, you're going to set up a cluster with them.
>>
>>>> Is it the only resource that will verify the availability of the node to add?
>>>
>>> Possibly, what other resources would you want to have to expose this information?
>>
>> The /nodes database looks like a good host for such a feature: if you add a
>> node there, it verifies and ensures that the node is available and ready to
>> join. Otherwise it returns an HTTP error on the POST/PUT request. Yes,
>> another system database, but isn't it already one?
>
> /nodes is already how BigCouch does this. All /_setup would do is PUT a new
> doc in othernode:5984/nodes to set things up, as you would do manually. The
> only reason it needs to go through an endpoint is cross-domain limitations.
>
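(The manual step Jan describes, as a sketch; the node-name format and the
credentials are assumptions, and "othernode" stands in for the target host.)

    # What /_setup would wrap: PUT a doc for the joining node into the
    # /nodes database. The doc ID is the Erlang node name; an empty
    # body is enough.
    curl -u admin:relax -X PUT \
         http://othernode:5984/nodes/couchdb@node-b.example.com \
         -d '{}'
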
>>
>>
>>>> How can I roll back my clustered CouchDB to a single node?
>>>
>>> Can you do this today? Is this a feature we want?
>>
>> Don't know about the first, and not sure about the second, but that's
>> certainly a case users will run into and ask about. I don't think the
>> recommendation "install a second instance, replicate the data, delete
>> the first" will be welcomed by them (:
>
> Sure, we need to think more things through, but that’s a little bit
> outside the initial setup discussion. Of course we need to make it
> so all cases are covered eventually, but we’d have to do this regardless :)
>
> Best
> Jan
> --
>
>
>
>>
>>>> And can I run setup again after that?
>>>
>>> Again unspecified, see above.
>>
>> If something can be done, one day it will be done. Just curious.
>>
>>>> Can I protect my instance from being included in a cluster by some other
>>>> node?
>>>
>>> Yes, e.g. don’t give the other node admin creds. Is this enough? I don’t
>>> know. Maybe /_setup is also disabled after the “cluster is ready” bit is
>>> flipped, and you need to unset that bit manually again to get access to
>>> /_setup.
>>
>> Disabling /_setup may be a solution, and for the other points, yes.
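
(If that bit lived in the server config, re-opening /_setup might look like
the sketch below. The section and key names are placeholders; no such flag
was actually specified in this thread.)

    # Hypothetical: unset the "cluster setup done" flag via the config
    # API so /_setup becomes reachable again. Section/key are invented.
    curl -u admin:relax -X PUT \
         http://node-a.example.com:5984/_config/cluster/setup_done \
         -d '"false"'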
>>
>> --
>> ,,,^..^,,,
>
