hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-1497) Possibility of duplicate blockids if dead-datanodes come back up after corresponding files were deleted
Date Fri, 15 Jun 2007 21:29:26 GMT
Possibility of duplicate blockids if dead-datanodes come back up after corresponding files
were deleted
-------------------------------------------------------------------------------------------------------

                 Key: HADOOP-1497
                 URL: https://issues.apache.org/jira/browse/HADOOP-1497
             Project: Hadoop
          Issue Type: Bug
            Reporter: dhruba borthakur
            Assignee: dhruba borthakur


Suppose a datanode D has a block B that belongs to file F. Suppose datanode D dies and
the namenode re-replicates its blocks to other datanodes. Now, suppose the user deletes file
F. The namenode removes all the blocks that belonged to file F. Next, suppose a new file F1
is created and the namenode generates the same blockid B for this new file F1.

Suppose the old datanode D then comes back to life. Datanode D now holds a block whose id B
the namenode considers valid (it belongs to F1), but whose contents are stale data from the
deleted file F -- effectively a corrupted replica.

This case can possibly be detected by the client (using CRC), but should HDFS handle this
scenario better?
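The sequence above can be sketched as a minimal simulation. All class and variable names below are invented for illustration (this is not actual HDFS code); it only shows how a blockid allocator that reuses ids after deletion lets a stale replica collide with a new file's block, and how a client-side CRC check is what catches the mismatch:

```python
import zlib

class NameNode:
    """Toy namenode with a naive blockid allocator (hypothetical)."""
    def __init__(self):
        self.next_id = 0
        self.block_map = {}  # blockid -> file name

    def allocate_block(self, filename):
        bid = self.next_id
        self.next_id += 1
        self.block_map[bid] = filename
        return bid

    def delete_file(self, filename):
        for bid in [b for b, f in self.block_map.items() if f == filename]:
            del self.block_map[bid]
        # Reusing ids after deletion is what reproduces the bug.
        self.next_id = min(self.block_map, default=-1) + 1

def crc(data: bytes) -> int:
    return zlib.crc32(data)

nn = NameNode()

# File F gets block B, replicated on datanode D.
B = nn.allocate_block("F")
stale_replica = (B, b"old contents of F")

# Datanode D dies; the user deletes F; the namenode forgets block B.
nn.delete_file("F")

# A new file F1 is created and receives the *same* blockid.
B1 = nn.allocate_block("F1")
assert B1 == B  # duplicate blockid

# D comes back to life: its stale replica now looks like a replica of
# F1's block. Only the client's CRC check exposes the corruption.
expected_crc = crc(b"new contents of F1")
bid, stale_data = stale_replica
client_detects_corruption = crc(stale_data) != expected_crc
print(client_detects_corruption)  # True
```

One possible fix direction implied by the report is to make blockids non-reusable (e.g. monotonically increasing, never reset), so a resurrected datanode's stale block can never match a live file's blockid.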

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

