couchdb-user mailing list archives

From Simon Keary <SKe...@immersivetechnologies.com>
Subject RE: Modifying a cluster
Date Wed, 02 Nov 2016 03:16:50 GMT


Thanks Adam,

Sorry, yes, it was the "/_dbs" endpoint I meant.

That all makes sense. What you're suggesting does seem simpler. I presume that temporarily
removing the node from the load balancer has the same effect as putting it into "maintenance
mode"? I also presume I could run a two-node (instead of three-node) cluster with q=8, r=1,
w=1, n=2, then temporarily remove the second node, do the updates, and add it back in?


I'm still confused, though, about why the PUT to "/_dbs/_global_changes" fails with "Only
reserved document ids may start with underscore". Is there another way to add nodes for the
system databases? I'm thinking of the longer-term issue of adding nodes to the cluster if we
want to grow it for additional durability/performance. I can see how to do it for the regular
user databases but not for the system databases.
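
For reference, this is roughly what the script does per database, rewritten here as a Python
sketch rather than the actual JS. The ports, credentials and node name are placeholders, and
the by_node / by_range / changelog layout of the shard map documents is just what I see in
our _dbs database:

import requests

NODE_LOCAL = "http://localhost:5986"      # node-local admin port (assumption)
CLUSTER = "http://localhost:5984"         # clustered port
AUTH = ("admin", "secret")                # placeholder credentials
NEW_NODE = "couchdb@node3.example.com"    # placeholder name of the node being added

# 1. Register the new node in the _nodes database.
requests.put(NODE_LOCAL + "/_nodes/" + NEW_NODE, json={}, auth=AUTH).raise_for_status()

# 2. For each regular database, add every shard range to the new node.
for db in requests.get(CLUSTER + "/_all_dbs", auth=AUTH).json():
    if db.startswith("_"):
        continue  # the system databases are the part I'm stuck on
    doc = requests.get(NODE_LOCAL + "/_dbs/" + db, auth=AUTH).json()
    ranges = doc["by_node"].setdefault(NEW_NODE, [])
    for shard_range, nodes in doc["by_range"].items():
        if NEW_NODE not in nodes:
            nodes.append(NEW_NODE)
            ranges.append(shard_range)
            doc["changelog"].append(["add", shard_range, NEW_NODE])
    requests.put(NODE_LOCAL + "/_dbs/" + db, json=doc, auth=AUTH).raise_for_status()

It works fine against the regular databases; it's only the PUT back to /_dbs for the system
databases that gets rejected.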

Thanks,
Simon


-----Original Message-----
From: Adam Kocoloski [mailto:kocolosk@apache.org] 
Sent: Tuesday, 1 November 2016 10:44 PM
To: user@couchdb.apache.org
Subject: Re: Modifying a cluster

Hi Simon, that sounds more or less correct. I think you meant the “_dbs/<database_name>”
endpoint instead of “_all_dbs”.

I’d agree that the process you outlined is a lot of manual labor. This is part of the price
that we pay for having the flexibility to define a different sharding topology for each database
in the cluster.

I might suggest a somewhat different approach: run a three-node cluster with n=3, then put the
nodes into “maintenance mode” one at a time to patch and upgrade them. The maintenance mode
flag will allow the node to continue to participate in the cluster and receive updates, but
will prevent it from responding to clients until you determine that it’s healthy again.
Running n=3 ensures that you will always have two live nodes durably committing data at any
point in time. I appreciate that this may be more expensive than the n=2 model, but it’s
far simpler operationally (as you won’t have to modify the sharding setup at all) and is
a configuration that is much more extensively tested.

If you want to use this technique the relevant configuration setting is

[couchdb]
maintenance_mode = true

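You can also flip it at runtime rather than editing the ini file; a PUT to the node-local
configuration API should work. Roughly (the host, port and credentials below are placeholders,
and this assumes the node-local port still serves the 1.x-style /_config endpoint):

import requests

node = "http://db1.example.com:5986"   # node-local port on the node being patched (placeholder)
auth = ("admin", "secret")             # placeholder admin credentials

# Config values are JSON strings, hence json="true" rather than a boolean.
r = requests.put(node + "/_config/couchdb/maintenance_mode", json="true", auth=auth)
r.raise_for_status()
print("previous value:", r.json())     # the PUT returns the old value

Set it back to "false" once the node has caught up and you've confirmed it's healthy.
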
Cheers, Adam

> On Nov 1, 2016, at 3:36 AM, Simon Keary <SKeary@immersivetechnologies.com> wrote:
> 
> Hi All,
> 
> I have a two-node cluster with the following configuration:
> 
> q=8, r=1, w=2, n=2
> 
> From time to time I want to be able to patch/upgrade the servers by adding two new nodes
> (servers) to the cluster and then removing the previous two. In this scenario I think all
> nodes in the cluster at any time (2-4) should have copies of all shards of all databases.
> My understanding is that to add a node I need to:
> 
> 1. Add the node to the list of cluster nodes via a PUT to /_nodes
> 2. For each database, update the /_all_dbs/<database_name> pseudo document, adding the new
> node to each shard in the document.
> 
> There are a few things I'm not clear of:
> 
> 1. Is this generally right? Assuming it is:
> 2. With a large number of databases it seems impractical to add a node manually, since a
> document for each database needs to be modified and the modification isn't trivial. At the
> moment I have a JS script to do this, but I wanted to check I'm not missing something.
> 3. I don't really understand how the system databases (_users, _metadata, _replication,
> _global_changes) fit into the picture. It looks like I need to treat them as normal databases
> and add all of their shards to the new node? Doing a PUT to (for instance) _all_dbs/_global_changes
> to do this fails with "Only reserved document ids may start with underscore", so I'm a little
> confused...
> 
> Thanks for any help!
> Simon
> 
> ________________________________
> Disclaimer:
> This message contains confidential information and is intended only for the individual(s)
> named. If you are not the named addressee you should not disseminate, distribute or copy this
> email. Please immediately delete it and all copies of it from your system, destroy any hard
> copies of it, and notify the sender. Email transmission cannot be guaranteed to be secure
> or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late
> or incomplete, or contain viruses. To the maximum extent permitted by law, Immersive Technologies
> Pty. Ltd. does not accept liability for any errors or omissions in the contents of this message
> which arise as a result of email transmission.