hadoop-common-user mailing list archives

From "Alex Loddengaard" <a...@cloudera.com>
Subject Re: Best way to handle namespace host failures
Date Mon, 10 Nov 2008 18:36:43 GMT
There has been a lot of discussion on this list about handling namenode
failover.  The most common approach is to back up the namenode's metadata to
an NFS mount and manually start a new namenode from that copy when your
current namenode fails.
As Hadoop exists today, the namenode is a single point of failure.
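As a concrete illustration of the NFS approach described above: in Hadoop of that era, `dfs.name.dir` in hadoop-site.xml accepts a comma-separated list of directories, so you can point it at both a local disk and an NFS mount (the paths below are hypothetical examples, not from the original message):

```xml
<!-- hadoop-site.xml (sketch; directory paths are hypothetical) -->
<property>
  <name>dfs.name.dir</name>
  <!-- The namenode writes its fsimage and edit log to every directory
       listed here, keeping the NFS copy current alongside the local one. -->
  <value>/local/hadoop/name,/mnt/nfs/hadoop/name</value>
</property>
```

If the namenode host then dies, a replacement namenode configured with a `dfs.name.dir` that includes the NFS copy can recover the filesystem metadata. This is still a manual recovery sketch, not automatic failover.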

Alex

On Mon, Nov 10, 2008 at 3:12 AM, Goel, Ankur <ankur.goel@corp.aol.com> wrote:

> Thanks for the replies, folks. We are not seeing this frequently, but we
> want to avoid a single point of failure and keep manual intervention
> to a minimum, or ideally eliminate it entirely. This is to ensure that the
> system runs smoothly in production without abrupt failures.
>
> Thanks
> -Ankur
>
> -----Original Message-----
> From: Amar Kamat [mailto:amarrk@yahoo-inc.com]
> Sent: Monday, November 10, 2008 3:53 PM
> To: core-user@hadoop.apache.org
> Subject: Re: Best way to handle namespace host failures
>
> Goel, Ankur wrote:
> > Hi Folks,
> >
> >              I am looking for some advice on some of the ways /
> > techniques that people are using to get around namenode failures
> > (both disk and host).
> >
> > We have a small cluster with several jobs scheduled for periodic
> > execution on the same host where the namenode runs. What we would like
> > to have is an automatic failover mechanism in Hadoop, so that a
> > secondary namenode automatically takes over the role of the master.
> >
> Are you seeing this frequently? If so, you should find out why it's
> happening. As far as I know, namenode failure is not expected to be
> frequent.
> Amar
> >
> >
> > I can move this discussion to a JIRA if people are interested.
> >
> >
> >
> > Thanks
> >
> > -Ankur
> >
> >
> >
>
>
