helix-user mailing list archives

From: kishore g <g.kish...@gmail.com>
Subject: Re: Custom Controller
Date: Wed, 24 Jul 2013 16:57:28 GMT
I am guessing you are using AUTO execution mode, where you provide a
preference list for each partition. If that's the case, then every node, when
it starts, can simply check whether it is the preferred one (first in the
list); if it is not, it can disable itself.

The preferred node will be the only one that starts up in an enabled state,
and when it becomes master it can enable the remaining nodes.

You can get the preference list from the IdealState.
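
Here is a minimal sketch (untested, and not your exact code) of both steps
using the Helix Java API, assuming a single MASTER/SLAVE resource; the
ZooKeeper address, cluster name, resource name, and class name are
placeholders:

import java.util.List;

import org.apache.helix.HelixAdmin;
import org.apache.helix.NotificationContext;
import org.apache.helix.manager.zk.ZKHelixAdmin;
import org.apache.helix.model.IdealState;
import org.apache.helix.model.Message;
import org.apache.helix.participant.statemachine.StateModel;
import org.apache.helix.participant.statemachine.StateModelInfo;
import org.apache.helix.participant.statemachine.Transition;

@StateModelInfo(initialState = "OFFLINE", states = { "MASTER", "SLAVE", "OFFLINE" })
public class PreferredMasterStateModel extends StateModel {
  // Placeholder connection details -- substitute your own.
  private static final String ZK_ADDR = "localhost:2181";
  private static final String CLUSTER = "MYCLUSTER";
  private static final String RESOURCE = "MYRESOURCE";

  private final String myInstance;

  public PreferredMasterStateModel(String myInstance) {
    this.myInstance = myInstance;
  }

  // Call this before the participant connects: read the preference lists from
  // the IdealState and disable this instance unless it is first in the list
  // for at least one partition.
  public static void disableIfNotPreferred(String instanceName) {
    HelixAdmin admin = new ZKHelixAdmin(ZK_ADDR);
    IdealState idealState = admin.getResourceIdealState(CLUSTER, RESOURCE);
    boolean preferred = false;
    for (String partition : idealState.getPartitionSet()) {
      List<String> prefList = idealState.getPreferenceList(partition);
      if (prefList != null && !prefList.isEmpty()
          && prefList.get(0).equals(instanceName)) {
        preferred = true;
        break;
      }
    }
    if (!preferred) {
      admin.enableInstance(CLUSTER, instanceName, false);
    }
  }

  // Once the preferred node actually becomes master, re-enable every other
  // instance in the cluster as part of the Slave->Master transition.
  @Transition(to = "MASTER", from = "SLAVE")
  public void onBecomeMasterFromSlave(Message message, NotificationContext context) {
    HelixAdmin admin = new ZKHelixAdmin(ZK_ADDR);
    for (String instance : admin.getInstancesInCluster(CLUSTER)) {
      if (!instance.equals(myInstance)) {
        admin.enableInstance(CLUSTER, instance, true);
      }
    }
  }
}

The non-preferred nodes would call disableIfNotPreferred() before
HelixManager.connect(); the state model factory and participant wiring are
omitted here.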


thanks,
Kishore G


On Wed, Jul 24, 2013 at 5:02 AM, Ming Fang <mingfang@mac.com> wrote:

>
> On Jul 24, 2013, at 2:29 AM, kishore g <g.kishore@gmail.com> wrote:
>
> > You can write a custom rebalancer. But it's not clear to me how you would
> differentiate between a node coming up for the first time versus the current
> master failing.
>
> I was going to store something in ZooKeeper to record the fact that any
> Node was started.  We will have an end-of-day scheduled job to clear those
> records.
>
> > In general, it's a good idea to avoid having logic that depends on the
> order of cluster events that happen in the cluster. This will make it
> difficult to scale the cluster or increase the number of partitions.
>
> I agree with you about scaling.  But our goal is not to dynamically scale
> out, but rather to create a MASTER/SLAVE set where the Nodes have a
> deterministic role.
> Let's say we have 20 physical machines, with 10 newer than the other 10,
> maybe due to our upgrade cycle.
> I want the MASTER to always run on the newer machines and the SLAVE to
> always run on the older machines.
> Currently we have to schedule the MASTERS to come up first, but that's not
> ideal.
>
> >
> > How about this: Node2 always starts in disabled mode (call
> admin.disableNode at startup before it connects to the cluster). After Node1
> becomes the master, it enables Node2 as part of the Slave-->Master transition.
> This guarantees that Node2 always waits until it sees Node1 as Master.
> >
> > Will this work for you?
>
> That might work.
> Although our code is identical across all the nodes.
> We use a JSON file to describe our cluster.
> Is there a way to disable a node using the JSON file?
>
> Thanks Kishore
> --ming
