[Archived from the Apache Hadoop user mailing list (user@hadoop.apache.org), Tue, 15 Jan 2013]
From: Harsh J <harsh@cloudera.com>
Date: Tue, 15 Jan 2013 15:25:58 +0530
Subject: Re: question about ZKFC daemon
To: user@hadoop.apache.org

Hi,

I fail to see your confusion.

ZKFC != ZK

ZK is quorum software, like QJM is. The ZK peers are to be run in odd numbers, just as the JNs are.

ZKFC is something the NN needs for its Automatic Failover capability. It is a client to ZK and thereby requires ZK's presence, which is why the odd number of ZK nodes is suggested. ZKFC itself is run only one per NN.

On Tue, Jan 15, 2013 at 3:23 PM, ESGLinux wrote:
> Hi all,
>
> I'm only testing the new HA feature; I'm not in a production system.
>
> Well, let's talk about the number of nodes and the ZKFC daemons.
>
> In this url:
>
> https://ccp.cloudera.com/display/CDH4DOC/HDFS+High+Availability+Initial+Deployment#HDFSHighAvailabilityInitialDeployment-DeployingAutomaticFailover
>
> you can read:
> If you have configured automatic failover using the ZooKeeper
> FailoverController (ZKFC), you must install and start the zkfc daemon on
> each of the machines that runs a NameNode.
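[Editor's note: for context, automatic failover of the kind the quoted documentation describes is enabled through configuration in addition to running the zkfc daemon. A minimal sketch, assuming the standard Hadoop 2 / CDH4 property names; the hostnames are hypothetical:]

```xml
<!-- hdfs-site.xml: enable automatic failover for the HA nameservice -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>

<!-- core-site.xml: the ZooKeeper quorum the two ZKFCs connect to -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>
```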
>
> So, the number of ZKFC daemons is two, but reading this url:
>
> http://archive.cloudera.com/cdh4/cdh/4/hadoop/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html#Deploying_ZooKeeper
>
> you can read this:
> In a typical deployment, ZooKeeper daemons are configured to run on three
> or five nodes
>
> I think that to ensure a good HA environment (of any kind) you need an odd
> number of nodes to avoid split-brain. The problem I see here is that if
> ZKFC monitors NameNodes, in a CDH4 environment you only have 2 NNs
> (active+standby).
>
> So I'm a bit confused with this deployment...
>
> Any suggestion?
>
> Thanks in advance for all your answers.
>
> Kind regards,
>
> ESGLinux
>
>
> 2013/1/14 Colin McCabe
>
>> On Mon, Jan 14, 2013 at 11:49 AM, Colin McCabe
>> wrote:
>> > Hi ESGLinux,
>> >
>> > In production, you need to run QJM on at least 3 nodes. You also need
>> > to run ZKFC on at least 3 nodes. You can run them on the same nodes
>> > if you like, though.
>>
>> Er, this should read "You also need to run ZooKeeper on at least 3
>> nodes." ZKFC, which talks to ZooKeeper, runs on only two nodes -- the
>> active NN node and the standby NN node.
>>
>> Colin
>>
>> >
>> > Of course, none of this is "needed" to set up an example cluster. If
>> > you just want to try something out, you can run everything on the same
>> > node if you want. It depends on what you're trying to do.
>> >
>> > cheers,
>> > Colin
>> >
>> >
>> > On Fri, Dec 28, 2012 at 3:02 AM, ESGLinux wrote:
>> >> Thank you for your answer Craig,
>> >>
>> >> I'm planning my cluster and for now I'm not sure how many machines I need ;-)
>> >>
>> >> If I have doubts I'll follow what Cloudera says, and if I have a problem I'll have
>> >> somewhere to ask for explanations :-)
>> >>
>> >> ESGLinux
>> >>
>> >>
>> >> 2012/12/28 Craig Munro
>> >>>
>> >>> OK, I have reliable storage on my datanodes so not an issue for me. If
>> >>> that's what Cloudera recommends then I'm sure it's fine.
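[Editor's note: the split-brain point above comes down to majority-quorum arithmetic. A quorum of N members needs floor(N/2)+1 votes to make progress, so an even-sized ensemble tolerates no more failures than the next-smaller odd one. A small illustrative sketch, not Hadoop code:]

```python
def quorum_tolerance(n: int) -> tuple[int, int]:
    """Return (majority size, failures tolerated) for an n-member quorum."""
    majority = n // 2 + 1
    return majority, n - majority

# 4 nodes need 3 votes and still tolerate only 1 failure, the same as
# 3 nodes -- which is why odd counts (3 or 5) are recommended.
for n in (3, 4, 5):
    majority, tolerated = quorum_tolerance(n)
    print(f"{n} nodes: majority={majority}, tolerates {tolerated} failure(s)")
```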
>> >>>
>> >>> On Dec 28, 2012 10:38 AM, "ESGLinux" wrote:
>> >>>>
>> >>>> Hi Craig,
>> >>>>
>> >>>> I'm a bit confused; I have read this from Cloudera:
>> >>>> https://ccp.cloudera.com/display/CDH4DOC/Hardware+Configuration+for+Quorum-based+Storage
>> >>>>
>> >>>> The JournalNode daemon is relatively lightweight, so these daemons can
>> >>>> reasonably be collocated on machines with other Hadoop daemons, for example
>> >>>> NameNodes, the JobTracker, or the YARN ResourceManager.
>> >>>> Cloudera recommends that you deploy the JournalNode daemons on the
>> >>>> "master" host or hosts (NameNode, Standby NameNode, JobTracker, etc.) so the
>> >>>> JournalNodes' local directories can use the reliable local storage on those
>> >>>> machines.
>> >>>> There must be at least three JournalNode daemons, since edit log
>> >>>> modifications must be written to a majority of JournalNodes.
>> >>>>
>> >>>> As you can read, they recommend putting the JournalNode daemons with the
>> >>>> NameNodes, but you say the opposite??
>> >>>>
>> >>>> Thanks for your answer,
>> >>>>
>> >>>> ESGLinux
>> >>>>
>> >>>>
>> >>>> 2012/12/28 Craig Munro
>> >>>>>
>> >>>>> You need the following:
>> >>>>>
>> >>>>> - active namenode + zkfc
>> >>>>> - standby namenode + zkfc
>> >>>>> - pool of journal nodes (odd number, 3 or more)
>> >>>>> - pool of zookeeper nodes (odd number, 3 or more)
>> >>>>>
>> >>>>> As the journal nodes hold the namesystem transactions, they should not be
>> >>>>> co-located with the namenodes, in case of failure. I distribute the journal
>> >>>>> and zookeeper nodes across the hosts running datanodes, or as Harsh says you
>> >>>>> could co-locate them on dedicated hosts.
>> >>>>>
>> >>>>> ZKFC does not monitor the JobTracker.
>> >>>>>
>> >>>>> Regards,
>> >>>>> Craig
>> >>>>>
>> >>>>> On Dec 28, 2012 9:25 AM, "ESGLinux" wrote:
>> >>>>>>
>> >>>>>> Hi,
>> >>>>>>
>> >>>>>> Well, if I have understood you, I can configure my NN HA cluster this
>> >>>>>> way:
>> >>>>>>
>> >>>>>> - Active NameNode + 1 ZKFC daemon + JournalNode
>> >>>>>> - Standby NameNode + 1 ZKFC daemon + JournalNode
>> >>>>>> - JobTracker node + 1 ZKFC daemon + JournalNode
>> >>>>>>
>> >>>>>> Is this right?
>> >>>>>>
>> >>>>>> Thanks in advance,
>> >>>>>>
>> >>>>>> ESGLinux
>> >>>>>>
>> >>>>>> 2012/12/27 Harsh J
>> >>>>>>>
>> >>>>>>> Hi,
>> >>>>>>>
>> >>>>>>> There are two different things here: Automatic Failover and Quorum
>> >>>>>>> Journal Manager. The former, used via a ZooKeeper Failover Controller,
>> >>>>>>> is to manage failovers automatically (based on health checks of NNs).
>> >>>>>>> The latter, used via a set of JournalNodes, is a medium of shared
>> >>>>>>> storage for namesystem transactions that helps enable HA.
>> >>>>>>>
>> >>>>>>> In a typical deployment, you want 3 or more (odd) JournalNodes for
>> >>>>>>> reliable HA, preferably on nodes of their own if possible (like you
>> >>>>>>> would for typical ZooKeepers, and you may co-locate with those as
>> >>>>>>> well) and one ZKFC for each NameNode (connected to the same ZK
>> >>>>>>> quorum).
>> >>>>>>>
>> >>>>>>> On Thu, Dec 27, 2012 at 5:33 PM, ESGLinux wrote:
>> >>>>>>> > Hi all,
>> >>>>>>> >
>> >>>>>>> > I have a doubt about how to deploy ZooKeeper in a NN HA
>> >>>>>>> > cluster.
>> >>>>>>> >
>> >>>>>>> > As far as I know, I need at least three nodes to run three ZooKeeper
>> >>>>>>> > FailoverController (ZKFC) daemons. I plan to put these 3 daemons this way:
>> >>>>>>> >
>> >>>>>>> > - Active NameNode + 1 ZKFC daemon
>> >>>>>>> > - Standby NameNode + 1 ZKFC daemon
>> >>>>>>> > - JobTracker node + 1 ZKFC daemon (is this right?)
>> >>>>>>> >
>> >>>>>>> > so the quorum is formed with these three nodes.
>> >>>>>>> > The nodes that run
>> >>>>>>> > a namenode are right, because the ZKFC monitors it, but what does the
>> >>>>>>> > third daemon do?
>> >>>>>>> >
>> >>>>>>> > As I read from this url:
>> >>>>>>> >
>> >>>>>>> > https://ccp.cloudera.com/display/CDH4DOC/Software+Configuration+for+Quorum-based+Storage#SoftwareConfigurationforQuorum-basedStorage-AutomaticFailoverConfiguration
>> >>>>>>> >
>> >>>>>>> > these daemons are only related to NameNodes ("Health monitoring --
>> >>>>>>> > the ZKFC pings its local NameNode on a periodic basis with a health-check
>> >>>>>>> > command."), so what does the third ZKFC do? I used the JobTracker node, but I could
>> >>>>>>> > use another node without any daemon on it...
>> >>>>>>> >
>> >>>>>>> > Thanks in advance,
>> >>>>>>> >
>> >>>>>>> > ESGLinux
>> >>>>>>>
>> >>>>>>> --
>> >>>>>>> Harsh J

--
Harsh J
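[Editor's note: the 3-or-5-node ZooKeeper ensemble discussed throughout the thread is defined in each ZooKeeper host's zoo.cfg. A minimal sketch; the hostnames and data directory are hypothetical:]

```
# zoo.cfg -- one copy on each of the three ZooKeeper hosts
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
# The ensemble: an odd number of peers, so a majority survives one failure.
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
```

Each host also needs a `myid` file under `dataDir` containing the number from its own `server.N` line. Once the ensemble is up, the failover state is initialized in ZooKeeper (`hdfs zkfc -formatZK` in Hadoop 2/CDH4) and the zkfc daemon is started on the two NameNode hosts only.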