hadoop-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: changing ha failover auto conf value
Date Thu, 22 Nov 2012 19:21:43 GMT

Losing a complete node (ZKFC plus NN) with a JournalNode (QJM)
configuration shouldn't cause automatic failover to fail. Could
you post both your NameNode and ZKFC logs somewhere we can take a
look?

On Fri, Nov 23, 2012 at 12:41 AM, Quentin Ambard
<quentin.ambard@gmail.com> wrote:
> Hello,
> I have 2 NameNodes in HA mode, running with 3 JournalNodes, 3 ZooKeeper
> servers, and 2 ZKFCs (one alongside each NameNode).
> If the server hosting both the active NameNode and its ZKFC goes down, the
> remaining ZKFC instance can't activate the standby NameNode.
> So I end up with a single NameNode stuck in standby mode.
> I can try to activate it manually with the following:
> hdfs haadmin -transitionToActive nn1 --forcemanual
> But it's recommended to disable automatic failover first to avoid split-brain.
> To do so, I stop all my NameNodes and set the
> dfs.ha.automatic-failover.enabled property to false.
> However, restarting the NameNodes doesn't pick up this change; I'm
> still getting the same warning when trying to activate the NameNode.
> How can I change this configuration value?
> Do I really need 3 NameNodes to avoid this situation (manual NameNode
> activation), or can I achieve a fully automatic setup with only 2
> NameNodes?
> Thanks for your help
> --
> Quentin Ambard
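
For reference, disabling automatic failover as the question describes is done by setting the property in hdfs-site.xml; a common pitfall (and a plausible explanation for the warning persisting, though the logs would confirm it) is that the change must be present on every NameNode and ZKFC host, and also in the local configuration read by the machine running `hdfs haadmin`, since the --forcemanual check consults the client-side config. A minimal sketch of the relevant fragment:

```xml
<!-- hdfs-site.xml (sketch): update on all NN/ZKFC hosts AND on the
     host running "hdfs haadmin", then restart the daemons -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>false</value>
</property>
```
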

Harsh J
