couchdb-dev mailing list archives

From Jan Lehnardt <...@apache.org>
Subject Re: [PROPOSAL] Fauxton config and the new config API
Date Thu, 02 Jul 2015 20:31:51 GMT

> On 02 Jul 2015, at 10:12, Alexander Shorin <kxepal@gmail.com> wrote:
> 
> Hi Robert,
> 
> 
> On Wed, Jul 1, 2015 at 4:14 PM, Robert Kowalski <rok@kowalski.gd> wrote:
>> Here are the things I found out or I had to explain:
>> 
>> - the feature is not intended for more than 5, maybe 10 nodes, as
>> beyond that it is not feasible for the user and also gets more and more
>> error-prone the more nodes we have in the cluster (e.g. under network
>> partitions)
>> 
> 
> Indeed, at some number of nodes you will have to use different
> tools to manage the config.
> There is always a big difference in the workflow between the "few" and "many" cases.
> 
>> - for all other settings the cluster is in a state where the configs
>> on the nodes differ, maybe for up to 10 minutes for a 10-node
>> cluster that gets a new configuration manually through the UI by
>> clicking through the nodes. For a change of the Basic-Auth settings
>> that means the user (a developer using CouchDB) has to put a lot
>> of code into the client that uses CouchDB to handle an
>> inconsistent cluster
>> 
> 
> When you have a lot of nodes, you will already have to use some DevOps
> magic or you're doomed. In that land, you are unlikely to have any
> need to edit the config through Fauxton.
> 
>> - when we try to update all nodes at once using multiple AJAX
>> requests, the cluster may be inconsistent for a few seconds. While
>> this is already a problem, it really becomes one when we try to change
>> sections like the admin config, where Fauxton gets a 401 at some point.
>> The 401 happens because the node our JS is talking to has already
>> received the new password and applied the change. This problem looks
>> different when talking directly to a node versus talking to it behind
>> a load balancer (as the load balancer shuffles our requests to /_session)
>> 
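The 401 failure mode described above can be reproduced with a toy in-memory model. All names here (makeCluster, updateAdminPassword, the node objects) are hypothetical illustrations of the race, not Fauxton or CouchDB APIs:

```javascript
// Toy model of a cluster where every request is authenticated against
// the one "entry" node our JS is talking to (or a balancer in front of it).
function makeCluster(names, password) {
  return names.map((name) => ({ name, password }));
}

// Naively push a new admin password to every node, authenticating each
// request with the OLD credentials.
function updateAdminPassword(cluster, entryIndex, oldPassword, newPassword) {
  const entry = cluster[entryIndex];
  const results = [];
  for (const node of cluster) {
    // The entry node checks our credentials on every request.
    if (entry.password !== oldPassword) {
      // The entry node already applied the change on an earlier
      // iteration -- the old credentials now fail with a 401.
      results.push({ node: node.name, status: 401 });
      continue;
    }
    node.password = newPassword;
    results.push({ node: node.name, status: 200 });
  }
  return results;
}
```

Note that the outcome depends on request order: if the entry node happens to be updated first, every remaining request gets a 401; if it is updated last, the loop succeeds, which is exactly the kind of nondeterminism a load balancer introduces.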
> 
> Oh no, reinventing cluster-wide configuration on top of Fauxton is
> not a good way to go.
> 
>> Here is the proposal for the config section in Fauxton:
>> 
>> Detect if we are running in "Single Node Mode". This can be an N=0
>> setting written by the setup wizard that is coming to Fauxton, if
>> the user chooses not to set up a cluster - or it can be a node count
>> of 1 in /_membership.
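The node-count check could be as small as a predicate over the GET /_membership response. isSingleNode is a hypothetical helper name; the {all_nodes, cluster_nodes} response shape is the one CouchDB 2.x ships:

```javascript
// True when the /_membership response reports exactly one node,
// i.e. we can safely show the 1.x-style config screen.
function isSingleNode(membership) {
  return Array.isArray(membership.all_nodes) && membership.all_nodes.length === 1;
}

// Usage with a typical single-node response:
// isSingleNode({ all_nodes: ["couchdb@127.0.0.1"],
//                cluster_nodes: ["couchdb@127.0.0.1"] })
```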
>> 
>> Only in that case do we display the config section, as we can
>> guarantee that config and login work for the user. If we detect
>> multiple nodes, we display an info box with our suggested way to
>> change the config for clusters.
>> 
>> In the case where a node has not yet joined a 50-node cluster, there
>> is no use case for configuring it through Fauxton, as those nodes will
>> be managed automatically - but even then an admin could use the UI to
>> copy the config bits over to the new node until it joins. Until then,
>> and also after the join (given the admin copied all config sections
>> properly), the UI stays usable (no random 401s)
>> 
>> The new endpoint would still be useful for ad-hoc HTTP queries to
>> find out the config of a given node. If it turns out not to be useful
>> we can remove it later, once we have learned more about how our users
>> (admins, devs etc.) use CouchDB.
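Such ad-hoc queries are plain per-node URLs. A small hypothetical helper to build them, assuming the /_node/{name}/_config path shape of the new per-node config API:

```javascript
// Build a per-node config URL, e.g. for an ad-hoc GET against one node.
// nodeConfigUrl is a hypothetical helper; section and key are optional,
// so the same function covers /_config, /_config/{section} and
// /_config/{section}/{key}.
function nodeConfigUrl(base, nodeName, section, key) {
  const parts = [base.replace(/\/$/, ""), "_node", encodeURIComponent(nodeName), "_config"];
  if (section) parts.push(encodeURIComponent(section));
  if (key) parts.push(encodeURIComponent(key));
  return parts.join("/");
}

// nodeConfigUrl("http://localhost:5984", "couchdb@node1", "chttpd", "port")
// -> "http://localhost:5984/_node/couchdb%40node1/_config/chttpd/port"
```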
>> 
>> This way we keep the config section for small setups, which will
>> also be a fair share of CouchDB 2.0 installations, provide a reliable
>> UI with the same high quality as in the past, and have a way to find
>> out node configs via HTTP on the cluster interface.
>> 
> 
> I think you're trying to solve two different cases here:
> 1. Small setups, 1-3 nodes
> 2. Large setups, 10+ nodes
> 
> Both have very different workflows.
> 
> Small setups don't require any deploy automation; config updates via
> the HTTP interface are totally fine for them.
> Large setups require more automation, and there no one should dare
> to configure the cluster via HTTP (until we have a cluster-wide /_config)
> 
> It's also a mistake to implement cluster-wide configuration on top of
> Fauxton - you've described the basic problems of that solution pretty well.
> 
> So the solution I see here is quite simple:
> 1. The user clicks the Config menu item in the sidebar;
> 2. Fauxton shows the list of nodes it can configure, with a warning
> that the user has to configure each node independently;
> 3. If CouchDB has more than one node, additionally warn about the
> risk of misconfiguring the whole cluster;
> 4. If there are a lot of nodes (more than 5 - chosen by fair dice
> roll), additionally suggest using tools like Ansible, Puppet etc.;
> 5. On config edits, show a confirmation box where the user has to
> type the FQDN of the node they are editing - to save them from
> updating the wrong node;
> 6. Don't try to apply the changes to all the nodes: that's not
> Fauxton's problem; the user was warned;
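The confirmation gate in step 5 is trivial to express. confirmNodeEdit is a hypothetical name; the idea is simply that a write is only allowed when the typed FQDN matches the target node exactly:

```javascript
// Gate a config edit behind an exact FQDN match, so a user cannot
// accidentally write to the wrong node. Whitespace from the input
// field is trimmed; anything else must match character for character.
function confirmNodeEdit(typedFqdn, targetNode) {
  return typedFqdn.trim() === targetNode;
}
```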
> 
> In the end:
> - Users of single-node setups may still configure their CouchDB as
> they did in the 1.x days;
> - Users of small setups may still configure their cluster via Fauxton,
> but the messages and confirmation boxes will nudge them to think
> carefully about what they are doing;
> - Users of large setups will avoid this feature anyway, but if
> they don't - again, everyone received the warning;
> 
> Everyone should now be fine and happy. With a cluster-wide /_config,
> most of these issues will be gone.
> 
> How do you feel about this plan?

Heya Alexander,

I really like your attempt to preserve this 1.x-ism for small clusters.

I’m not sure I feel really good about this though, for the same reasons
that Robert K outlined.

I’d be more comfortable in saying 2.0 does not have any config screen
in Fauxton and for 2.1 we figure out cluster-wide configuration and then
that gets a Fauxton UI.

For our 2.0 messaging then, we could explain that the 1.x-ism compatibility
release is 2.1 (or whenever this can land), so that people migrating from
1.x need to be aware of this limitation, or wait until 2.1.

Robert, I’m with you and Klaus, make it so! :)

Best
Jan
--


