hadoop-hdfs-user mailing list archives

From Ayon Sinha <ayonsi...@yahoo.com>
Subject Re: Question regarding datanode being wiped by Hadoop
Date Tue, 12 Apr 2011 14:52:11 GMT
The datanode uses the dfs config XML file (hdfs-site.xml) to tell the datanode process
which disks are available for storage. Can you check that the config XML still lists all
the partitions and was not overwritten during the restore process?
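
For reference, a minimal sketch of the relevant hdfs-site.xml section (the property name
dfs.data.dir is the 0.20-era one; the mount points shown are hypothetical examples):

<!-- hdfs-site.xml: comma-separated list of local directories where this
     datanode stores its blocks. If a restore overwrites this file and a
     partition drops out of the list, the datanode stops using it. -->
<property>
  <name>dfs.data.dir</name>
  <!-- hypothetical mount points; every data partition should appear here -->
  <value>/data/1/dfs/data,/data/2/dfs/data</value>
</property>

Comparing the restored file against the same file on a healthy datanode is a quick way to
confirm that no partition was dropped.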

From: felix gao <gre1600@gmail.com>
To: hdfs-user@hadoop.apache.org
Sent: Tue, April 12, 2011 7:46:31 AM
Subject: Question regarding datanode being wiped by Hadoop

What reason/condition would cause a datanode's blocks to be removed? One of the datanodes
in our cluster crashed because of bad RAM. After the system was upgraded and the
datanode/tasktracker were brought back online the next day, we noticed that the amount of
space utilized was minimal and the cluster was rebalancing blocks onto the datanode. It
would seem the prior blocks were removed. Was this because the datanode was declared dead?
What are the criteria for the namenode (assuming it's the namenode that decides) to
determine when a datanode should remove its prior blocks?