Subject: Re: question about ZKFC daemon
From: ESGLinux <esggrupos@gmail.com>
To: user@hadoop.apache.org
Date: Tue, 15 Jan 2013 11:08:03 +0100

Hi Harsh,

Now I'm completely confused :-))))

As you pointed out, ZKFC runs only on the NN. That looks right.

So, what are the ZK peers (the odd number I'm looking for), and where do I have to run them? On another 3 nodes?

As I can read in the previous URL:

"In a typical deployment, ZooKeeper daemons are configured to run on three or five nodes. Since ZooKeeper itself has light resource requirements, it is acceptable to collocate the ZooKeeper nodes on the same hardware as the HDFS NameNode and Standby Node. Many operators choose to deploy the third ZooKeeper process on the same node as the YARN ResourceManager. It is advisable to configure the ZooKeeper nodes to store their data on separate disk drives from the HDFS metadata for best performance and isolation."

Here, ZooKeeper daemons = ZKFC?

Thanks,

ESGLinux



2013/1/15 Harsh J <harsh@cloudera.com>
Hi,

I fail to see your confusion.

ZKFC != ZK

ZK is quorum software, like QJM is. The ZK peers are to be run in odd numbers, just as the JNs are to be.

ZKFC is something the NN needs for its Automatic Failover capability. It is a client to ZK and thereby demands ZK's presence, for which the odd number of nodes is suggested. ZKFC itself is only to be run one per NN.
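In config terms that wiring is just two settings -- a minimal sketch,
with placeholder ZooKeeper hostnames (zk1/zk2/zk3.example.com):

    <!-- hdfs-site.xml: turn on automatic failover for the NameNodes -->
    <property>
      <name>dfs.ha.automatic-failover.enabled</name>
      <value>true</value>
    </property>

    <!-- core-site.xml: where the ZKFCs find the ZooKeeper ensemble -->
    <property>
      <name>ha.zookeeper.quorum</name>
      <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
    </property>

Note that the quorum list names the three ZK servers, not the ZKFCs.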


On Tue, Jan 15, 2013 at 3:23 PM, ESGLinux <esggrupos@gmail.com> wrote:
Hi all,

I'm only testing the new HA feature; I'm not on a production system.

Well, let's talk about the number of nodes and the ZKFC daemons.

In this URL:

https://ccp.cloudera.com/display/CDH4DOC/HDFS+High+Availability+Initial+Deployment#HDFSHighAvailabilityInitialDeployment-DeployingAutomaticFailover

you can read:

"If you have configured automatic failover using the ZooKeeper
FailoverController (ZKFC), you must install and start the zkfc daemon on
each of the machines that runs a NameNode."
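That step is small in practice -- a minimal sketch, assuming a stock
Apache tarball layout (the CDH packages wrap the same commands in init
scripts):

    # run once, from either NameNode host: creates the HA znode in ZooKeeper
    $ hdfs zkfc -formatZK

    # then on EACH of the two NameNode machines:
    $ sbin/hadoop-daemon.sh start zkfc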

So the number of ZKFC daemons is two, but reading this URL:

http://archive.cloudera.com/cdh4/cdh/4/hadoop/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html#Deploying_ZooKeeper

you can read this:

"In a typical deployment, ZooKeeper daemons are configured to run on three or five nodes."

I think that to ensure a good HA environment (of any kind) you need an odd
number of nodes to avoid split-brain. The problem I see here is that if
ZKFC monitors NameNodes, in a CDH4 environment you only have 2 NNs
(active + standby).

So I'm a bit confused by this deployment...

Any suggestions?

Thanks in advance for all your answers.

Kind regards,

ESGLinux




2013/1/14 Colin McCabe <cmccabe@alumni.cmu.edu>
On Mon, Jan 14, 2013 at 11:49 AM, Colin McCabe <cmccabe@alumni.cmu.edu> wrote:
> Hi ESGLinux,
>
> In production, you need to run QJM on at least 3 nodes.  You also need
> to run ZKFC on at least 3 nodes.  You can run them on the same nodes
> if you like, though.

Er, this should read "You also need to run ZooKeeper on at least 3
nodes."  ZKFC, which talks to ZooKeeper, runs on only two nodes -- the
active NN node and the standby NN node.
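For the ZooKeeper side of that, a minimal zoo.cfg for a three-node
ensemble (hostnames are placeholders) would look like:

    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/var/lib/zookeeper
    clientPort=2181
    # one line per peer; each host also needs a matching "myid" file in dataDir
    server.1=zk1.example.com:2888:3888
    server.2=zk2.example.com:2888:3888
    server.3=zk3.example.com:2888:3888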

Colin

>
> Of course, none of this is "needed" to set up an example cluster.  If
> you just want to try something out, you can run everything on the same
> node if you want.  It depends on what you're trying to do.
>
> cheers,
> Colin
>
>
> On Fri, Dec 28, 2012 at 3:02 AM, ESGLinux <esggrupos@gmail.com> wrote:
>> Thank you for your answer Craig,
>>
>> I'm planning my cluster and for now I'm not sure how many machines I need ;-)
>>
>> If I have doubts I'll do what Cloudera says, and if I have a problem I'll
>> have somewhere to ask for explanations :-)
>>
>> ESGLinux
>>
>>
>>
>> 2012/12/28 Craig Munro <craig.munro@gmail.com>
>>>
>>> OK, I have reliable storage on my datanodes so not an issue for me.  If
>>> that's what Cloudera recommends then I'm sure it's fine.
>>>
>>> On Dec 28, 2012 10:38 AM, "ESGLinux" <esggrupos@gmail.com> wrote:
>>>>
>>>> Hi Craig,
>>>>
>>>> I'm a bit confused; I have read this from Cloudera:
>>>> https://ccp.cloudera.com/display/CDH4DOC/Hardware+Configuration+for+Quorum-based+Storage
>>>>
>>>> "The JournalNode daemon is relatively lightweight, so these daemons can
>>>> reasonably be collocated on machines with other Hadoop daemons, for example
>>>> NameNodes, the JobTracker, or the YARN ResourceManager.
>>>> Cloudera recommends that you deploy the JournalNode daemons on the
>>>> "master" host or hosts (NameNode, Standby NameNode, JobTracker, etc.) so the
>>>> JournalNodes' local directories can use the reliable local storage on those
>>>> machines.
>>>> There must be at least three JournalNode daemons, since edit log
>>>> modifications must be written to a majority of JournalNodes."
>>>>
>>>> As you can read, they recommend putting the JournalNode daemons with the
>>>> NameNodes, but you say the opposite??
>>>>
>>>>
>>>> Thanks for your answer,
>>>>
>>>> ESGLinux,
>>>>
>>>>
>>>>
>>>>
>>>> 2012/12/28 Craig Munro <craig.munro@gmail.com>
>>>>>
>>>>> You need the following:
>>>>>
>>>>> - active namenode + zkfc
>>>>> - standby namenode + zkfc
>>>>> - pool of journal nodes (odd number, 3 or more)
>>>>> - pool of zookeeper nodes (odd number, 3 or more)
>>>>>
>>>>> As the journal nodes hold the namesystem transactions they should not be
>>>>> co-located with the namenodes in case of failure.  I distribute the journal
>>>>> and zookeeper nodes across the hosts running datanodes or, as Harsh says, you
>>>>> could co-locate them on dedicated hosts.
>>>>>
>>>>> ZKFC does not monitor the JobTracker.
>>>>>
>>>>> Regards,
>>>>> Craig
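For completeness: the NameNodes reach that pool of JournalNodes through a
single hdfs-site.xml setting -- a minimal sketch, with placeholder hosts
and "mycluster" standing in for the nameservice ID:

    <property>
      <name>dfs.namenode.shared.edits.dir</name>
      <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
    </property>

With three JournalNodes, each edit must be acknowledged by a majority
(two of the three) before the write succeeds, which is why an odd pool
size is recommended.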
>>>>>
>>>>> On Dec 28, 2012 9:25 AM, "ESGLinux" <esggrupos@gmail.com> wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Well, if I have understood you, I can configure my NN HA cluster this
>>>>>> way:
>>>>>>
>>>>>> - Active NameNode + 1 ZKFC daemon + Journal Node
>>>>>> - Standby NameNode + 1 ZKFC daemon + Journal Node
>>>>>> - JobTracker node + 1 ZKFC daemon + Journal Node
>>>>>>
>>>>>> Is this right?
>>>>>>
>>>>>> Thanks in advance,
>>>>>>
>>>>>> ESGLinux,
>>>>>>
>>>>>> 2012/12/27 Harsh J <harsh@cloudera.com>
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> There are two different things here: Automatic Failover and Quorum
>>>>>>> Journal Manager. The former, used via a ZooKeeper Failover Controller,
>>>>>>> is to manage failovers automatically (based on health checks of NNs).
>>>>>>> The latter, used via a set of Journal Nodes, is a medium of shared
>>>>>>> storage for namesystem transactions that helps enable HA.
>>>>>>>
>>>>>>> In a typical deployment, you want 3 or more (odd) JournalNodes for
>>>>>>> reliable HA, preferably on nodes of their own if possible (like you
>>>>>>> would for typical ZooKeepers, and you may co-locate with those as
>>>>>>> well) and one ZKFC for each NameNode (connected to the same ZK
>>>>>>> quorum).
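Once both pieces are running, the failover state can be checked from the
shell -- a sketch, assuming the two NameNode IDs are nn1 and nn2:

    $ hdfs haadmin -getServiceState nn1
    active
    $ hdfs haadmin -getServiceState nn2
    standby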
>>>>>>>
>>>>>>> On Thu, Dec 27, 2012 at 5:33 PM, ESGLinux <esggrupos@gmail.com> wrote:
>>>>>>> > Hi all,
>>>>>>> >
>>>>>>> > I have a doubt about how to deploy ZooKeeper in a NN HA
>>>>>>> > cluster.
>>>>>>> >
>>>>>>> > As far as I know, I need at least three nodes to run three ZooKeeper
>>>>>>> > FailOver Controllers (ZKFC). I plan to put these 3 daemons this way:
>>>>>>> >
>>>>>>> > - Active NameNode + 1 ZKFC daemon
>>>>>>> > - Standby NameNode + 1 ZKFC daemon
>>>>>>> > - JobTracker node + 1 ZKFC daemon (is this right?)
>>>>>>> >
>>>>>>> > so the quorum is formed with these three nodes. The nodes that run
>>>>>>> > a NameNode are right because the ZKFC monitors it, but what does
>>>>>>> > the third daemon do?
>>>>>>> >
>>>>>>> > As I read from this URL:
>>>>>>> >
>>>>>>> > https://ccp.cloudera.com/display/CDH4DOC/Software+Configuration+for+Quorum-based+Storage#SoftwareConfigurationforQuorum-basedStorage-AutomaticFailoverConfiguration
>>>>>>> >
>>>>>>> > these daemons are only related to NameNodes ("Health monitoring -
>>>>>>> > the ZKFC pings its local NameNode on a periodic basis with a
>>>>>>> > health-check command."), so what does the third ZKFC do? I used the
>>>>>>> > JobTracker node, but I could use another node without any daemon on it...
>>>>>>> >
>>>>>>> > Thanks in advance,
>>>>>>> >
>>>>>>> > ESGLinux,
>>>>>>> >
>>>>>>> >
>>>>>>> >
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Harsh J
>>>>>>
>>>>>>
>>>>
>>




--
Harsh J
