hadoop-user mailing list archives

From Mohammad Tariq <donta...@gmail.com>
Subject Re: Hadoop hardware failure recovery
Date Fri, 10 Aug 2012 19:16:43 GMT
Anil has put it very well. Hadoop HA is not yet production ready, and
since you are just beginning your Hadoop journey, I thought it best not
to mention it earlier. If you want to try HA anyway, just pull the
source from trunk and do a build yourself.
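
To come back to your original question about drive failures: HDFS keeps
multiple copies of every block (3 by default, controlled by the
dfs.replication property), so when a disk dies the NameNode notices the
missing replicas and re-replicates them from the surviving copies onto
other nodes. As a rough sketch of checking and changing replication
through the Java API (the file path below is made up):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationCheck {
        public static void main(String[] args) throws Exception {
            // Picks up core-site.xml/hdfs-site.xml from the classpath.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Hypothetical file; point this at one of your own.
            Path file = new Path("/user/aji/data/sample.txt");

            // Report how many replicas HDFS is keeping for this file.
            FileStatus status = fs.getFileStatus(file);
            System.out.println("replication = " + status.getReplication());

            // Raise this one file to 5 replicas, one per node on a
            // 5-node cluster, if you want extra safety for it.
            fs.setReplication(file, (short) 5);
        }
    }

Note that losing a whole disk usually takes the DataNode down with it
unless you raise dfs.datanode.failed.volumes.tolerated (0 by default),
but either way the blocks themselves are safe as long as at least one
replica survives somewhere else in the cluster.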

Regards,
    Mohammad Tariq


On Sat, Aug 11, 2012 at 12:42 AM, anil gupta <anilgupta84@gmail.com> wrote:
> Hi Aji,
>
> Adding to what Mohammad Tariq said: if you use Hadoop 2.0.0-alpha, then
> the NameNode is not a single point of failure. However, Hadoop 2.0.0 is
> not of production quality yet (it's in alpha).
> The NameNode used to be a single point of failure in releases prior to
> Hadoop 2.0.0.
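>
> Just to give a feel for it, here is a minimal client-side sketch of an
> HA setup in 2.0.0 (the nameservice and host names below are made up,
> and this would normally live in hdfs-site.xml rather than in code):
>
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.fs.FileSystem;
>
>     public class HaClientSketch {
>         public static void main(String[] args) throws Exception {
>             Configuration conf = new Configuration();
>             conf.set("fs.defaultFS", "hdfs://mycluster");
>             conf.set("dfs.nameservices", "mycluster");
>             // Two NameNodes, one active and one standby.
>             conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
>             conf.set("dfs.namenode.rpc-address.mycluster.nn1", "nn-host1:8020");
>             conf.set("dfs.namenode.rpc-address.mycluster.nn2", "nn-host2:8020");
>             // Lets the client fail over between the two NameNodes.
>             conf.set("dfs.client.failover.proxy.provider.mycluster",
>                 "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
>             FileSystem fs = FileSystem.get(conf);
>             System.out.println("connected to " + fs.getUri());
>         }
>     }
>
> With that in place the client no longer cares which NameNode is active.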
>
> HTH,
> Anil Gupta
>
>
> On Fri, Aug 10, 2012 at 11:55 AM, Ted Dunning <tdunning@maprtech.com> wrote:
>>
>> Hadoop's file system was (mostly) modeled on the design of Google's
>> original file system.
>>
>> The original paper is probably the best way to learn about that.
>>
>> http://research.google.com/archive/gfs.html
>>
>>
>>
>> On Fri, Aug 10, 2012 at 11:38 AM, Aji Janis <aji1705@gmail.com> wrote:
>>>
>>> I am very new to Hadoop. I am considering setting up a Hadoop cluster
>>> consisting of 5 nodes, where each node has 3 internal hard drives. I
>>> understand HDFS has a configurable redundancy feature, but what happens
>>> if an entire drive crashes (physically) for whatever reason? How does
>>> Hadoop recover, if it can, from this situation? What else should I know
>>> before setting up my cluster this way? Thanks in advance.
>>>
>>>
>>
>
>
>
> --
> Thanks & Regards,
> Anil Gupta
