hadoop-common-user mailing list archives

From "叶双明" <yeshuangm...@gmail.com>
Subject Re: Thinking about retriving DFS metadata from datanodes!!!
Date Wed, 10 Sep 2008 07:06:08 GMT
I think we could let each block carry three simple pieces of additional information,
unused in normal operation:
   1. which file the block belongs to
   2. the block's position within that file
   3. how many blocks the file has
After the cluster has been destroyed, we can set up a new NameNode and then
rebuild the metadata from the information reported by the datanodes.
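The rebuild step could be sketched roughly as follows. This is only a minimal illustration, not Hadoop code: it assumes each datanode reports one tuple per block of the form (file path, block index, total blocks, block id), and the function name and shapes are hypothetical.

```python
from collections import defaultdict

def rebuild_metadata(block_reports):
    """Rebuild a file -> ordered-block-list map from datanode block reports.

    Each report is a tuple (file_path, block_index, total_blocks, block_id).
    Returns (files, incomplete): `files` maps path -> [block_id, ...] in
    order for fully recovered files; `incomplete` lists paths for which
    some blocks were never reported (partial recovery only).
    """
    by_file = defaultdict(dict)   # path -> {block_index: block_id}
    totals = {}                   # path -> declared block count
    for path, index, total, block_id in block_reports:
        by_file[path][index] = block_id
        totals[path] = total

    files, incomplete = {}, []
    for path, blocks in by_file.items():
        total = totals[path]
        if len(blocks) == total:
            # All indices 0..total-1 reported: file fully recoverable.
            files[path] = [blocks[i] for i in range(total)]
        else:
            incomplete.append(path)
    return files, incomplete
```

With replication, the same (path, index) pair arrives from several datanodes, so a file is lost only when every replica of one of its blocks is gone.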

And the cost is a little disk space, surely less than 1 KB per block, I
think. I don't mean it as a replacement for, or comparison with, multiple
NameNodes, but just a possible mechanism to recover data; the point is "recover".

hehe~~ thanks.

2008/9/10 Raghu Angadi <rangadi@yahoo-inc.com>

> The main problem is the complexity of maintaining the accuracy of the metadata.
> In other words, what do you think the cost is?
> Do you think writing fsimage to multiple places helps with the terrorist
> attack scenario? That is supported even now.
> Raghu.
> 叶双明 wrote:
>> Thanks for paying attention to my tentative idea!
>> What I thought about isn't how to store the metadata, but a final (or last)
>> way to recover valuable data in the cluster when the worst happens, i.e.
>> something that destroys the metadata on all of the multiple NameNodes. If a
>> terrorist attack or natural disaster destroys half of the cluster's nodes
>> including every NameNode, we can recover as much data as possible through
>> this mechanism, and have a good chance of recovering all of the cluster's
>> data because of the original replication.
>> Any suggestion is appreciated!

Sorry for my english!! 明