hadoop-mapreduce-user mailing list archives

From shashwat shriparv <dwivedishash...@gmail.com>
Subject Re: NameNode failure and recovery!
Date Wed, 03 Apr 2013 18:49:23 GMT
If you are not in a position to go for HA, just keep your checkpoint period
shorter so that recent metadata is recoverable from the SNN.
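For reference, the checkpoint interval is set in hdfs-site.xml. A minimal sketch, assuming Hadoop 2.x property names (in 1.x the equivalent was fs.checkpoint.period); the values shown are illustrative, not recommendations:

```xml
<!-- hdfs-site.xml: how often the SNN takes a checkpoint -->
<property>
  <name>dfs.namenode.checkpoint.period</name>
  <!-- seconds between checkpoints; shorter means less edit-log loss on NN failure -->
  <value>600</value>
</property>
<property>
  <name>dfs.namenode.checkpoint.txns</name>
  <!-- also checkpoint after this many uncheckpointed transactions -->
  <value>100000</value>
</property>
```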

And you always have the option of:
hadoop namenode -recover
Try it on a test cluster first and get familiar with it.
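A rough sketch of how that recovery run looks; the NameNode must be stopped first, and the exact prompts vary by version:

```
# Stop the NameNode, then run recovery in the foreground as the HDFS user.
# -recover replays the edit log and prompts you on corrupt entries;
# adding -force takes the default answer automatically (data may be discarded).
hadoop namenode -recover
```

Because recovery can drop unrecoverable edits, doing a dry run on a test cluster, as suggested above, is the safe way to learn its behavior.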

And take a backup of the image to some reliable storage.
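One way to take that image backup, assuming Hadoop 2.x where dfsadmin has a -fetchImage subcommand (the destination path here is just an example):

```
# Pull the most recent fsimage from the active NameNode
# into a backup directory, e.g. an NFS mount kept off-cluster.
hdfs dfsadmin -fetchImage /backup/namenode/
```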



∞
Shashwat Shriparv



On Wed, Apr 3, 2013 at 9:56 PM, Harsh J <harsh@cloudera.com> wrote:

> There is a 3rd, most excellent way: Use HDFS's own HA, see
>
> http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html
> :)
>
> On Wed, Apr 3, 2013 at 8:10 PM, Rahul Bhattacharjee
> <rahul.rec.dgp@gmail.com> wrote:
> > Hi all,
> >
> > I was reading about Hadoop and got to know that there are two ways to
> > protect against the name node failures.
> >
> > 1) Write to an NFS mount along with the usual local disk.
> >  -or-
> > 2) Use a secondary name node. In case of failure of the NN, the SNN can
> > take charge.
> >
> > My questions :-
> >
> > 1) The SNN always lags, so when the SNN becomes primary after an NN
> > failure, the edits that have not yet been merged into the image file are
> > lost, and the SNN's state would not be consistent with the NN's state
> > before the failure.
> >
> > 2) Also, I have read that the other purpose of the SNN is to
> > periodically merge the edit logs with the image file. If a setup goes
> > with option #1 (writing to NFS, no SNN), then who does this merging?
> >
> > Thanks,
> > Rahul
> >
> >
>
>
>
> --
> Harsh J
>
