Subject: Re: question about ZKFC daemon
From: ESGLinux <esggrupos@gmail.com>
To: user@hadoop.apache.org
Date: Tue, 15 Jan 2013 11:17:45 +0100

ok,

That's the origin of my confusion, I thought they were the same.
I'm going to read that doc to shed a bit of light on ZooKeeper for me.

Thank you very much for your help,

ESGLinux

2013/1/15 Harsh J
> No, ZooKeeper daemons == http://zookeeper.apache.org.
>
> On Tue, Jan 15, 2013 at 3:38 PM, ESGLinux wrote:
>> Hi Harsh,
>>
>> Now I'm completely confused :-))))
>>
>> As you pointed out, ZKFC runs only on the NN. That looks right.
>>
>> So, what are the ZK peers (the odd number I'm looking for), and where do I
>> have to run them? On another 3 nodes?
>>
>> As I read in the previous URL:
>>
>> "In a typical deployment, ZooKeeper daemons are configured to run on three
>> or five nodes. Since ZooKeeper itself has light resource requirements, it
>> is acceptable to collocate the ZooKeeper nodes on the same hardware as the
>> HDFS NameNode and Standby Node. Many operators choose to deploy the third
>> ZooKeeper process on the same node as the YARN ResourceManager. It is
>> advisable to configure the ZooKeeper nodes to store their data on separate
>> disk drives from the HDFS metadata for best performance and isolation."
>>
>> Here, ZooKeeper daemons = ZKFC?
>>
>> Thanks
>>
>> ESGLinux
>>
>> 2013/1/15 Harsh J
>>> Hi,
>>>
>>> I fail to see your confusion.
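The "three or five nodes" in the deployment passage quoted in the thread refers to the ZooKeeper ensemble itself, not the ZKFCs. As an illustrative sketch only (hostnames and paths are placeholders, not taken from this thread), a minimal three-node ensemble configuration in zoo.cfg might look like:

```
# zoo.cfg -- illustrative three-node ZooKeeper ensemble
# (zk1..zk3 hostnames and dataDir are placeholder assumptions)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
```

Each server also needs a myid file under dataDir matching its server.N entry.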
>>> ZKFC != ZK
>>>
>>> ZK is quorum software, like QJM is. The ZK peers are to be run in odd
>>> numbers, just as the JNs are.
>>>
>>> ZKFC is something the NN needs for its Automatic Failover capability. It
>>> is a client to ZK and thereby requires ZK's presence, which is why the
>>> odd number of nodes is suggested. ZKFC itself is run only one per NN.
>>>
>>> On Tue, Jan 15, 2013 at 3:23 PM, ESGLinux wrote:
>>>> Hi all,
>>>>
>>>> I'm only testing the new HA feature; I'm not on a production system.
>>>>
>>>> Well, let's talk about the number of nodes and the ZKFC daemons.
>>>>
>>>> At this URL:
>>>>
>>>> https://ccp.cloudera.com/display/CDH4DOC/HDFS+High+Availability+Initial+Deployment#HDFSHighAvailabilityInitialDeployment-DeployingAutomaticFailover
>>>>
>>>> you can read:
>>>> "If you have configured automatic failover using the ZooKeeper
>>>> FailoverController (ZKFC), you must install and start the zkfc daemon
>>>> on each of the machines that runs a NameNode."
>>>>
>>>> So the number of ZKFC daemons is two, but reading this URL:
>>>>
>>>> http://archive.cloudera.com/cdh4/cdh/4/hadoop/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html#Deploying_ZooKeeper
>>>>
>>>> you can read this:
>>>> "In a typical deployment, ZooKeeper daemons are configured to run on
>>>> three or five nodes."
>>>>
>>>> I think that to ensure a good HA environment (of any kind) you need an
>>>> odd number of nodes to avoid split-brain. The problem I see here is
>>>> that if ZKFC monitors NameNodes, in a CDH4 environment you only have
>>>> 2 NNs (active + standby).
>>>>
>>>> So I'm a bit confused by this deployment...
>>>>
>>>> Any suggestions?
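The split-brain concern about odd node counts comes down to majority-quorum arithmetic: an ensemble of n nodes needs a strict majority to make progress, so it tolerates (n - 1) // 2 failures, and an even-sized ensemble tolerates no more failures than the next smaller odd one. A quick illustrative check (not ZooKeeper code):

```python
def quorum_size(n: int) -> int:
    """Smallest strict majority of an n-node ensemble."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """How many nodes can fail while a majority survives."""
    return n - quorum_size(n)  # equals (n - 1) // 2

for n in (3, 4, 5):
    # 3 and 4 nodes both tolerate 1 failure; 5 tolerates 2,
    # which is why odd ensemble sizes are recommended.
    print(n, tolerated_failures(n))
```

This is why a fourth ZooKeeper or JournalNode buys no extra fault tolerance over three.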
>>>> Thanks in advance for all your answers.
>>>>
>>>> Kind regards,
>>>>
>>>> ESGLinux
>>>>
>>>> 2013/1/14 Colin McCabe
>>>>> On Mon, Jan 14, 2013 at 11:49 AM, Colin McCabe wrote:
>>>>> > Hi ESGLinux,
>>>>> >
>>>>> > In production, you need to run QJM on at least 3 nodes. You also
>>>>> > need to run ZKFC on at least 3 nodes. You can run them on the same
>>>>> > nodes if you like, though.
>>>>>
>>>>> Er, this should read "You also need to run ZooKeeper on at least 3
>>>>> nodes." ZKFC, which talks to ZooKeeper, runs on only two nodes -- the
>>>>> active NN node and the standby NN node.
>>>>>
>>>>> Colin
>>>>>
>>>>> > Of course, none of this is "needed" to set up an example cluster.
>>>>> > If you just want to try something out, you can run everything on
>>>>> > the same node if you want. It depends on what you're trying to do.
>>>>> >
>>>>> > cheers,
>>>>> > Colin
>>>>> >
>>>>> > On Fri, Dec 28, 2012 at 3:02 AM, ESGLinux wrote:
>>>>> >> Thank you for your answer, Craig.
>>>>> >>
>>>>> >> I'm planning my cluster, and for now I'm not sure how many
>>>>> >> machines I need ;-)
>>>>> >>
>>>>> >> If I have doubts I'll follow what Cloudera says, and if I have a
>>>>> >> problem I'll know where to ask for explanations :-)
>>>>> >>
>>>>> >> ESGLinux
>>>>> >>
>>>>> >> 2012/12/28 Craig Munro
>>>>> >>> OK, I have reliable storage on my datanodes, so it's not an issue
>>>>> >>> for me. If that's what Cloudera recommends then I'm sure it's
>>>>> >>> fine.
>>>>> >>> On Dec 28, 2012 10:38 AM, "ESGLinux" wrote:
>>>>> >>>> Hi Craig,
>>>>> >>>>
>>>>> >>>> I'm a bit confused; I have read this from Cloudera:
>>>>> >>>>
>>>>> >>>> https://ccp.cloudera.com/display/CDH4DOC/Hardware+Configuration+for+Quorum-based+Storage
>>>>> >>>>
>>>>> >>>> "The JournalNode daemon is relatively lightweight, so these
>>>>> >>>> daemons can reasonably be collocated on machines with other
>>>>> >>>> Hadoop daemons, for example NameNodes, the JobTracker, or the
>>>>> >>>> YARN ResourceManager. Cloudera recommends that you deploy the
>>>>> >>>> JournalNode daemons on the 'master' host or hosts (NameNode,
>>>>> >>>> Standby NameNode, JobTracker, etc.) so the JournalNodes' local
>>>>> >>>> directories can use the reliable local storage on those
>>>>> >>>> machines. There must be at least three JournalNode daemons,
>>>>> >>>> since edit log modifications must be written to a majority of
>>>>> >>>> JournalNodes."
>>>>> >>>>
>>>>> >>>> As you can read, they recommend putting the JournalNode daemons
>>>>> >>>> with the NameNodes, but you say the opposite??
>>>>> >>>>
>>>>> >>>> Thanks for your answer,
>>>>> >>>>
>>>>> >>>> ESGLinux
>>>>> >>>>
>>>>> >>>> 2012/12/28 Craig Munro
>>>>> >>>>> You need the following:
>>>>> >>>>>
>>>>> >>>>> - active namenode + zkfc
>>>>> >>>>> - standby namenode + zkfc
>>>>> >>>>> - pool of journal nodes (odd number, 3 or more)
>>>>> >>>>> - pool of zookeeper nodes (odd number, 3 or more)
>>>>> >>>>>
>>>>> >>>>> As the journal nodes hold the namesystem transactions, they
>>>>> >>>>> should not be co-located with the namenodes in case of failure.
>>>>> >>>>> I distribute the journal and zookeeper nodes across the hosts
>>>>> >>>>> running datanodes or, as Harsh says, you could co-locate them
>>>>> >>>>> on dedicated hosts.
>>>>> >>>>>
>>>>> >>>>> ZKFC does not monitor the JobTracker.
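The component list above maps onto a handful of HDFS HA properties. As a sketch only (the nameservice ID "mycluster" and all hostnames are placeholder assumptions; the CDH4/Hadoop HA docs linked in this thread list the full set of required properties), the QJM and automatic-failover pieces look roughly like:

```xml
<!-- hdfs-site.xml fragment: illustrative only, hostnames/IDs are placeholders -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>

<!-- core-site.xml fragment: the ZK ensemble the ZKFCs connect to -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>
```

Note the split: dfs.namenode.shared.edits.dir points at the three JournalNodes, while ha.zookeeper.quorum points at the three ZooKeeper peers -- two separate quorums, matching Craig's list.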
>>>>> >>>>> Regards,
>>>>> >>>>> Craig
>>>>> >>>>>
>>>>> >>>>> On Dec 28, 2012 9:25 AM, "ESGLinux" wrote:
>>>>> >>>>>> Hi,
>>>>> >>>>>>
>>>>> >>>>>> Well, if I have understood you correctly, I can configure my
>>>>> >>>>>> NN HA cluster this way:
>>>>> >>>>>>
>>>>> >>>>>> - Active NameNode + 1 ZKFC daemon + JournalNode
>>>>> >>>>>> - Standby NameNode + 1 ZKFC daemon + JournalNode
>>>>> >>>>>> - JobTracker node + 1 ZKFC daemon + JournalNode
>>>>> >>>>>>
>>>>> >>>>>> Is this right?
>>>>> >>>>>>
>>>>> >>>>>> Thanks in advance,
>>>>> >>>>>>
>>>>> >>>>>> ESGLinux
>>>>> >>>>>>
>>>>> >>>>>> 2012/12/27 Harsh J
>>>>> >>>>>>> Hi,
>>>>> >>>>>>>
>>>>> >>>>>>> There are two different things here: Automatic Failover and
>>>>> >>>>>>> Quorum Journal Manager. The former, used via a ZooKeeper
>>>>> >>>>>>> Failover Controller, is to manage failovers automatically
>>>>> >>>>>>> (based on health checks of NNs). The latter, used via a set
>>>>> >>>>>>> of JournalNodes, is a medium of shared storage for
>>>>> >>>>>>> namesystem transactions that helps enable HA.
>>>>> >>>>>>>
>>>>> >>>>>>> In a typical deployment, you want 3 or more (odd)
>>>>> >>>>>>> JournalNodes for reliable HA, preferably on nodes of their
>>>>> >>>>>>> own if possible (like you would for typical ZooKeepers, and
>>>>> >>>>>>> you may co-locate with those as well), and one ZKFC for each
>>>>> >>>>>>> NameNode (connected to the same ZK quorum).
>>>>> >>>>>>>
>>>>> >>>>>>> On Thu, Dec 27, 2012 at 5:33 PM, ESGLinux wrote:
>>>>> >>>>>>> > Hi all,
>>>>> >>>>>>> >
>>>>> >>>>>>> > I have a doubt about how to deploy ZooKeeper in a NN HA
>>>>> >>>>>>> > cluster.
>>>>> >>>>>>> >
>>>>> >>>>>>> > As far as I know, I need at least three nodes to run three
>>>>> >>>>>>> > ZooKeeper FailOver Controllers (ZKFC). I plan to put these
>>>>> >>>>>>> > 3 daemons this way:
>>>>> >>>>>>> >
>>>>> >>>>>>> > - Active NameNode + 1 ZKFC daemon
>>>>> >>>>>>> > - Standby NameNode + 1 ZKFC daemon
>>>>> >>>>>>> > - JobTracker node + 1 ZKFC daemon (is this right?)
>>>>> >>>>>>> >
>>>>> >>>>>>> > So the quorum is formed by these three nodes. The nodes
>>>>> >>>>>>> > that run a NameNode are right because the ZKFC monitors
>>>>> >>>>>>> > it, but what does the third daemon do?
>>>>> >>>>>>> >
>>>>> >>>>>>> > As I read from this URL:
>>>>> >>>>>>> >
>>>>> >>>>>>> > https://ccp.cloudera.com/display/CDH4DOC/Software+Configuration+for+Quorum-based+Storage#SoftwareConfigurationforQuorum-basedStorage-AutomaticFailoverConfiguration
>>>>> >>>>>>> >
>>>>> >>>>>>> > these daemons are only related to NameNodes ("Health
>>>>> >>>>>>> > monitoring - the ZKFC pings its local NameNode on a
>>>>> >>>>>>> > periodic basis with a health-check command."), so what
>>>>> >>>>>>> > does the third ZKFC do? I used the JobTracker node, but I
>>>>> >>>>>>> > could use another node without any daemon on it...
>>>>> >>>>>>> >
>>>>> >>>>>>> > Thanks in advance,
>>>>> >>>>>>> >
>>>>> >>>>>>> > ESGLinux
>>>>> >>>>>>>
>>>>> >>>>>>> --
>>>>> >>>>>>> Harsh J
>>>
>>> --
>>> Harsh J
>
> --
> Harsh J
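The health-monitoring behavior quoted in the thread (the ZKFC periodically pinging its local NameNode with a health-check command, and triggering a failover via ZooKeeper when the NameNode stops responding) can be sketched as a toy control loop. This is an invented illustration of the idea only -- the class, names, and threshold below are not actual ZKFC code:

```python
# Toy sketch of a failover controller's health-check loop.
# Names and thresholds are invented for illustration; the real
# ZKFC coordinates the failover through a ZooKeeper lock znode.
from dataclasses import dataclass

@dataclass
class FailoverController:
    max_failures: int = 3      # consecutive failed checks before acting
    failures: int = 0
    failed_over: bool = False

    def check(self, namenode_healthy: bool) -> None:
        if namenode_healthy:
            self.failures = 0  # a successful ping resets the counter
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                # Here the real controller would release/grab the ZK
                # lock so the standby NameNode becomes active.
                self.failed_over = True

zkfc = FailoverController()
for healthy in [True, False, False, False]:
    zkfc.check(healthy)
print(zkfc.failed_over)  # prints True: three consecutive failures
```

The point of the sketch is the thread's conclusion: this loop only makes sense next to a NameNode, which is why there are exactly two ZKFCs, while the odd-numbered quorum lives in ZooKeeper itself.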