hadoop-user mailing list archives

From Panshul Whisper <ouchwhis...@gmail.com>
Subject Re: hadoop namenode recovery
Date Tue, 15 Jan 2013 03:04:24 GMT
Thank you for the reply.

Is there a way to configure my cluster to fail over to the Secondary
NameNode automatically if the primary NameNode fails?
When I run my current Hadoop cluster, I see both the primary and secondary
NameNodes running. What is the Secondary NameNode for, and where is it
configured?
Also, is it possible to run two or more NameNodes in the same cluster?


On Mon, Jan 14, 2013 at 6:50 PM, <bejoy.hadoop@gmail.com> wrote:

> Hi Panshul,
> Usually, for reliability, multiple directories are configured in
> dfs.name.dir, one of which is a remote location such as an NFS mount.
> That way, even if the NameNode machine crashes entirely, you still have
> the fsimage and edit log on the NFS mount, and they can be used to
> reconstruct the NameNode.
> Regards
> Bejoy KS
> Sent from remote device, Please excuse typos
> ------------------------------
> From: Panshul Whisper <ouchwhisper@gmail.com>
> Date: Mon, 14 Jan 2013 17:25:08 -0800
> To: user@hadoop.apache.org
> Reply-To: user@hadoop.apache.org
> Subject: hadoop namenode recovery
> Hello,
> Is there a standard way to guard against a NameNode crash in a
> Hadoop cluster? What is the best practice for overcoming Hadoop's
> single-point-of-failure problem?
> I am not ready to take chances on a production server with the Hadoop 2.0
> Alpha release, which claims to have solved the problem. Is there anything
> else I can do to either prevent the failure or recover from it very
> quickly?
> Thanking You,
> --
> Regards,
> Ouch Whisper
> 010101010101
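The multiple-directory setup Bejoy describes goes in hdfs-site.xml. A minimal sketch, assuming Hadoop 1.x (where the property is named dfs.name.dir; in 2.x it is dfs.namenode.name.dir) and with illustrative local and NFS paths:

```xml
<!-- hdfs-site.xml: redundant NameNode metadata directories.
     Both paths below are examples; substitute your own local
     disk and NFS mount point. -->
<property>
  <name>dfs.name.dir</name>
  <value>/data/1/dfs/nn,/mnt/nfs/dfs/nn</value>
</property>
```

The NameNode writes its fsimage and edit log to every directory in the comma-separated list, so the surviving copy on the NFS mount can be used to seed a replacement NameNode if the machine is lost.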

Ouch Whisper
