helix-user mailing list archives

From: Ming Fang <mingf...@mac.com>
Subject: Re: Custom Controller
Date: Wed, 24 Jul 2013 12:02:08 GMT

On Jul 24, 2013, at 2:29 AM, kishore g <g.kishore@gmail.com> wrote:

> You can write a custom rebalancer. But it's not clear to me how you would differentiate
> between a node coming up for the first time vs. the current master failing.

I was going to store a record in ZooKeeper whenever a node starts.
An end-of-day scheduled job will clear those records.
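
For what it's worth, here is a rough sketch of the bookkeeping I have in mind, using the plain ZooKeeper client. The /nodeStarts path and the naming are made up:

import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class StartRecords {
  private static final String ROOT = "/nodeStarts"; // made-up path

  // Called once at node startup: leave a persistent marker znode.
  static void recordStart(ZooKeeper zk, String nodeName) throws Exception {
    try {
      if (zk.exists(ROOT, false) == null) {
        zk.create(ROOT, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
      }
      zk.create(ROOT + "/" + nodeName, new byte[0],
          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    } catch (KeeperException.NodeExistsException ignored) {
      // This node already recorded a start today -- that's fine.
    }
  }

  // Run by the end-of-day job: clear all of today's markers.
  static void clearStarts(ZooKeeper zk) throws Exception {
    List<String> children = zk.getChildren(ROOT, false);
    for (String child : children) {
      zk.delete(ROOT + "/" + child, -1); // -1 matches any version
    }
  }
}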

> In general, it's a good idea to avoid having logic that depends on the order of
> events that happen in the cluster. This will make it difficult to scale the cluster or increase
> the number of partitions.

I agree with you about scaling. But our goal is not to scale out dynamically; rather, it is to
create a MASTER/SLAVE set where the nodes have deterministic roles.
Say we have 20 physical machines, 10 of them newer than the other 10, perhaps due to our upgrade
cycle.
I want the MASTERs always running on the newer machines and the SLAVEs always running on the
older machines.
Currently we have to schedule the MASTERs to come up first, but that's not ideal.
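
For example, I am wondering whether Helix's SEMI_AUTO rebalance mode could give us this
without depending on start order, since the first live instance in a partition's preference
list is preferred for the top state (MASTER). A rough sketch, with made-up cluster, resource,
and instance names:

import java.util.Arrays;
import org.apache.helix.HelixAdmin;
import org.apache.helix.manager.zk.ZKHelixAdmin;
import org.apache.helix.model.IdealState;
import org.apache.helix.model.IdealState.RebalanceMode;

public class PinMasters {
  public static void main(String[] args) {
    HelixAdmin admin = new ZKHelixAdmin("zk1:2181"); // made-up ZK address
    IdealState is = admin.getResourceIdealState("myCluster", "myResource");
    is.setRebalanceMode(RebalanceMode.SEMI_AUTO);
    // List the newer machine first so it is preferred for MASTER;
    // the older machine then picks up SLAVE.
    is.setPreferenceList("myResource_0",
        Arrays.asList("newNode_12918", "oldNode_12918"));
    admin.setResourceIdealState("myCluster", "myResource", is);
  }
}

The idea is that the preference-list order, not the order the nodes happen to start in,
decides who becomes MASTER.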

> 
> How about this: Node2 always starts in disabled mode (call admin.disableNode at startup,
> before it connects to the cluster). After Node1 becomes the master, as part of the Slave-->Master
> transition it enables Node2. This guarantees that Node2 always waits until it sees Node1 as Master.
> 
> Will this work for you?

That might work.
Although our code is identical across all the nodes.
We use a JSON file to describe our cluster.
Is there a way to disable a node using the JSON file?
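
For example, I imagine our startup code could read a flag from the JSON file itself and
disable the node through HelixAdmin before connecting. A rough sketch, with a made-up
"startDisabled" field and Jackson for the parsing:

import java.io.File;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.helix.HelixAdmin;
import org.apache.helix.manager.zk.ZKHelixAdmin;

public class StartupDisable {
  public static void main(String[] args) throws Exception {
    // Made-up JSON shape: {"cluster":"myCluster","instance":"node2_12918","startDisabled":true}
    JsonNode cfg = new ObjectMapper().readTree(new File("cluster.json"));
    if (cfg.path("startDisabled").asBoolean(false)) {
      HelixAdmin admin = new ZKHelixAdmin("zk1:2181"); // made-up ZK address
      // Disable the instance before the participant connects;
      // enableInstance(cluster, instance, false) is the disable call.
      admin.enableInstance(cfg.path("cluster").asText(),
          cfg.path("instance").asText(), false);
    }
    // ... then connect the participant (HelixManager) as usual.
  }
}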

Thanks, Kishore.
--ming