hadoop-mapreduce-dev mailing list archives

From Tsuyoshi OZAWA <ozawa.tsuyo...@gmail.com>
Subject Re: More than one RM in YARN?
Date Thu, 20 Sep 2012 01:10:57 GMT
Chris,

It's still a work in progress at this point, as Harsh said, though the design
of YARN does take ZK-based failover of the RM into account (see the
configuration sketches below).
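
For anyone finding this thread in the archives: Chris's question further down
is whether a restarted RM recovers its state from ZK. Once the restart work in
MAPREDUCE-4326 lands, recovery is driven by a pluggable state store. The
sketch below is only illustrative; the property names are the ones later
Hadoop 2.x releases use (an assumption here), and none of it works against the
code discussed in this thread:

  <!-- yarn-site.xml (illustrative; property names assumed from later releases) -->
  <property>
    <!-- Let the RM reload previously persisted state on restart. -->
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <!-- Persist RM state in ZooKeeper rather than keeping it only in memory. -->
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <property>
    <!-- Placeholder ZK quorum; replace with your own ensemble. -->
    <name>yarn.resourcemanager.zk-address</name>
    <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
  </property>

With a store like this, a bounced RM no longer starts with a blank slate; it
reads its saved application state back from ZK.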

On Thu, Sep 20, 2012 at 6:08 AM, Harsh J <harsh@cloudera.com> wrote:
> Chris,
>
> It isn't. There are a few classes for this, but it doesn't work. The RM
> does not use ZK today.
>
> On Thu, Sep 20, 2012 at 2:22 AM, Chris Riccomini
> <criccomini@linkedin.com> wrote:
>> Hey Bikas,
>>
>> One other question: does the RM recover from ZK if I restart it? It seems
>> like, when I bounce an RM, it restarts with a blank slate, even when
>> configured to talk to ZooKeeper. Is this also not implemented yet?
>>
>> Cheers,
>> Chris
>>
>> On 9/19/12 11:04 AM, "Bikas Saha" <bikas@hortonworks.com> wrote:
>>
>>>Sorry about that. There is some code in the RM that's supposed to be used
>>>to save state, but it's not used. So neither state-preserving restart nor
>>>failover works.
>>>
>>>I have been guilty of being tardy on this project but I hope to be working
>>>on the restart piece fairly soon. Please do watch MAPREDUCE-4326 (this
>>>should soon move to the YARN sub-project) for details.
>>>
>>>Bikas
>>>
>>>-----Original Message-----
>>>From: Chris Riccomini [mailto:criccomini@linkedin.com]
>>>Sent: Wednesday, September 19, 2012 10:21 AM
>>>To: mapreduce-dev@hadoop.apache.org
>>>Subject: Re: More than one RM in YARN?
>>>
>>>Hey Bikas,
>>>
>>>Correct.
>>>
>>>Initially, I'd hoped that NMs failed over to the appropriate RMs via ZK
>>>notifications. When that didn't seem to be implemented, I hoped that the RMs
>>>at least kept in sync with each other via ZK, so that I could use round-robin
>>>DNS (or a HW load balancer) for the RMs. When that didn't seem to be
>>>implemented either, I decided to just use DNS and a manual failover.
>>>
>>>Simple is best, I suppose.
>>>
>>>Cheers,
>>>Chris
>>>
>>>On 9/19/12 10:18 AM, "Bikas Saha" <bikas@hortonworks.com> wrote:
>>>
>>>>If I understand you correctly, your target scenario is failover?
>>>>
>>>>It's good to know that you have something that works for you right now.
>>>>
>>>>Bikas
>>>>
>>>>-----Original Message-----
>>>>From: Chris Riccomini [mailto:criccomini@linkedin.com]
>>>>Sent: Wednesday, September 19, 2012 10:06 AM
>>>>To: mapreduce-dev@hadoop.apache.org
>>>>Subject: Re: More than one RM in YARN?
>>>>
>>>>Hey Bikas,
>>>>
>>>>Ideally, I'd like to run them concurrently. What I'm discovering is
>>>>that this is not (currently) possible, so I'm settling for a manual
>>>>failover using DNS, which should suffice for the time being.
>>>>
>>>>Cheers,
>>>>Chris
>>>>
>>>>On 9/19/12 10:05 AM, "Bikas Saha" <bikas@hortonworks.com> wrote:
>>>>
>>>>>Chris,
>>>>>
>>>>>Could you please elaborate a bit on the use case you are targeting? Do
>>>>>you want to run multiple active RMs in the cluster, used concurrently?
>>>>>Or are you talking about failover scenarios?
>>>>>
>>>>>Bikas
>>>>>
>>>>>-----Original Message-----
>>>>>From: Harsh J [mailto:harsh@cloudera.com]
>>>>>Sent: Tuesday, September 18, 2012 9:37 PM
>>>>>To: mapreduce-dev@hadoop.apache.org
>>>>>Subject: Re: More than one RM in YARN?
>>>>>
>>>>>Chris,
>>>>>
>>>>>Not yet; it is coming:
>>>>>https://issues.apache.org/jira/browse/MAPREDUCE-4326 should go in
>>>>>first, followed by the HA work via
>>>>>https://issues.apache.org/jira/browse/MAPREDUCE-4345 (the latter isn't
>>>>>exactly a dupe of the former).
>>>>>
>>>>>On Wed, Sep 19, 2012 at 6:12 AM, Chris Riccomini
>>>>><criccomini@linkedin.com>
>>>>>wrote:
>>>>>> Hey Guys,
>>>>>>
>>>>>> Is anyone running more than one RM on a YARN cluster (backed by ZK)?
>>>>>> We're looking at this, and I want to know if there's a standard way to
>>>>>> do it. Are people doing round-robin DNS, load balancers, etc.? Is it
>>>>>> possible to configure node managers to have a list of RMs rather than a
>>>>>> single one?
>>>>>>
>>>>>> Thanks!
>>>>>> Chris
>>>>>
>>>>>
>>>>>
>>>>>--
>>>>>Harsh J
>>
>
>
>
> --
> Harsh J
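
On Chris's original question about pointing the NodeManagers at a list of RMs
rather than a single one: that is essentially what the HA work in
MAPREDUCE-4345 is aiming at. A sketch, again assuming the configuration keys
that later Hadoop 2.x releases use (rm1/rm2 and the hostnames are
placeholders):

  <!-- yarn-site.xml (illustrative; HA keys assumed from later releases) -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yarn-cluster</value>
  </property>
  <property>
    <!-- NMs and clients iterate over these ids to locate the active RM. -->
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>rm1.example.com</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>rm2.example.com</value>
  </property>
  <property>
    <!-- Failover/election state lives in ZooKeeper. -->
    <name>yarn.resourcemanager.zk-address</name>
    <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
  </property>

In that model only one RM is active at a time and the NMs fail over by
retrying the configured ids, so round-robin DNS or a load balancer in front of
the RMs isn't needed; until it ships, Chris's manual DNS failover is a
reasonable stopgap.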



-- 
OZAWA Tsuyoshi
