hadoop-user mailing list archives

From anil gupta <anilgupt...@gmail.com>
Subject Re: Hadoop hardware failure recovery
Date Fri, 10 Aug 2012 19:12:04 GMT
Hi Aji,

Adding to what Mohammad Tariq said: if you use Hadoop 2.0.0-alpha, then the
Namenode is no longer a single point of failure. However, Hadoop 2.0.0 is not
of production quality yet (it is still in alpha).
The Namenode was a single point of failure in releases prior to Hadoop 2.0.0.
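
On the redundancy side of Aji's question: block replication and per-datanode
disk failure tolerance are both set in hdfs-site.xml. A minimal sketch (the
default replication factor is 3; the failed-volumes property is available in
the 1.x/0.20.2xx-era releases onward, so check your version's docs):

```xml
<configuration>
  <!-- Number of copies HDFS keeps of each block. With 3, losing one
       drive (or one node) does not lose data; the Namenode re-replicates
       the affected blocks from the surviving copies. -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>

  <!-- How many local disks a datanode may lose before it shuts itself
       down. Default is 0: one failed drive takes the whole datanode
       offline. With 3 drives per node you may want to tolerate 1. -->
  <property>
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>1</value>
  </property>
</configuration>
```

So for a physically dead drive the recovery story is: the datanode (or the
whole node, depending on the setting above) drops out, the Namenode notices
the under-replicated blocks, and it copies them to other nodes until the
replication factor is restored. No manual intervention is needed for the data
itself; you just replace the drive at your leisure.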

HTH,
Anil Gupta

On Fri, Aug 10, 2012 at 11:55 AM, Ted Dunning <tdunning@maprtech.com> wrote:

> Hadoop's file system was (mostly) copied from the concepts of Google's old
> file system.
>
> The original paper is probably the best way to learn about that.
>
> http://research.google.com/archive/gfs.html
>
>
>
> On Fri, Aug 10, 2012 at 11:38 AM, Aji Janis <aji1705@gmail.com> wrote:
>
>> I am very new to Hadoop. I am considering setting up a Hadoop cluster
>> consisting of 5 nodes where each node has 3 internal hard drives. I
>> understand HDFS has a configurable redundancy feature but what happens if
>> an entire drive crashes (physically) for whatever reason? How does Hadoop
>> recover, if it can, from this situation? What else should I know before
>> setting up my cluster this way? Thanks in advance.
>>
>>
>>
>


-- 
Thanks & Regards,
Anil Gupta
