hadoop-hdfs-dev mailing list archives

From "Andrew Ryan (JIRA)" <j...@apache.org>
Subject [jira] Created: (HDFS-1282) namenode should reject datanodes which send impossible block reports
Date Tue, 06 Jul 2010 22:13:49 GMT
namenode should reject datanodes which send impossible block reports
--------------------------------------------------------------------

                 Key: HDFS-1282
                 URL: https://issues.apache.org/jira/browse/HDFS-1282
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: data-node, name-node
    Affects Versions: 0.20.1
            Reporter: Andrew Ryan


Over the past few weeks we've had several datanodes with bad disks that suffered ext3 corruption
and consequently started reporting impossible values for how full they are. One such node, for
example, has a configured capacity of 10.86TB but reports 1733.95TB used, for a total of
15973.57% utilization.

Node       | Last Contact | Admin State | Configured Capacity (TB) | Used (TB) | Non DFS Used (TB) | Remaining (TB) | Used (%) | Remaining (%) | Blocks
hadoop2254 | 44           | In Service  | 10.86                    | 1733.95   | 0                 | 5.24           | 15973.57 | 48.25        | 65602

If we can avoid generating such bogus data on the datanode in the first place, that would be
great. But if the namenode receives such an impossible block report, it should consider that
datanode untrustworthy and, in my opinion, mark it dead.
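The check being proposed could be sketched roughly as below. The class and method names here are hypothetical illustrations, not Hadoop's actual namenode API; the idea is just that a storage report whose claimed usage exceeds the node's configured capacity (or contains negative figures) should be rejected rather than trusted.

```java
// Hypothetical sketch: validate a datanode's self-reported storage
// figures before trusting them. The names PlausibilityCheck and
// isPlausibleReport are illustrative, not part of Hadoop's real API.
public class PlausibilityCheck {

    /**
     * A storage report is plausible only if every figure is
     * non-negative and neither the claimed usage nor the claimed
     * remaining space exceeds the configured capacity.
     */
    public static boolean isPlausibleReport(long capacityBytes,
                                            long dfsUsedBytes,
                                            long remainingBytes) {
        if (capacityBytes < 0 || dfsUsedBytes < 0 || remainingBytes < 0) {
            return false;
        }
        // A node can never have used, or have left over, more space
        // than it claims to have in total.
        return dfsUsedBytes <= capacityBytes
            && remainingBytes <= capacityBytes;
    }

    public static void main(String[] args) {
        long tb = 1024L * 1024 * 1024 * 1024;

        // A healthy node: 10.86 TB capacity, about half used.
        System.out.println(
            isPlausibleReport((long) (10.86 * tb), 5 * tb, 5 * tb));

        // The corrupted node from this report: 1733.95 TB "used"
        // against only 10.86 TB of configured capacity.
        System.out.println(
            isPlausibleReport((long) (10.86 * tb),
                              (long) (1733.95 * tb),
                              (long) (5.24 * tb)));
    }
}
```

In a real fix this check would presumably run where the namenode processes heartbeats/block reports, and a failing datanode would be flagged or excluded rather than silently counted in cluster totals.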

The "fix" in our case was either to fsck or replace the bad disk.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

