hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-1135) Block report processing may incorrectly cause the namenode to delete blocks
Date Tue, 20 Mar 2007 16:52:32 GMT
Block report processing may incorrectly cause the namenode to delete blocks 
----------------------------------------------------------------------------

                 Key: HADOOP-1135
                 URL: https://issues.apache.org/jira/browse/HADOOP-1135
             Project: Hadoop
          Issue Type: Bug
          Components: dfs
            Reporter: dhruba borthakur
         Assigned To: dhruba borthakur


When a block report arrives at the namenode, the namenode goes through all the blocks on that
datanode. If a block is not valid, it is marked for deletion. The blocks-to-be-deleted are
sent to the datanode in the response to the next heartbeat RPC, and the namenode sends only 100
blocks-to-be-deleted at a time; this limit was introduced as part of HADOOP-994. The bug is that
if the number of blocks-to-be-deleted exceeds 100, the namenode marks all of the remaining
blocks in the block report for deletion, whether or not they are valid.
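
For illustration only, here is a minimal Java sketch of the failure mode. This is not the
actual NameNode code: the Block class, its validity flag, and the MAX_BLOCKS_PER_HEARTBEAT
constant are hypothetical stand-ins for the real DFS internals.

    import java.util.ArrayList;
    import java.util.List;

    public class BlockReportSketch {

        // Hypothetical stand-in for a reported block and the namenode's validity check.
        static class Block {
            final long id;
            final boolean valid;
            Block(long id, boolean valid) { this.id = id; this.valid = valid; }
        }

        // Hypothetical stand-in for the 100-block cap introduced by HADOOP-994.
        static final int MAX_BLOCKS_PER_HEARTBEAT = 100;

        // Buggy pattern: once the cap is reached, every remaining block in the
        // report is marked for deletion, whether or not it is valid.
        static List<Block> processReportBuggy(List<Block> report) {
            List<Block> toDelete = new ArrayList<Block>();
            for (Block b : report) {
                if (toDelete.size() >= MAX_BLOCKS_PER_HEARTBEAT || !b.valid) {
                    toDelete.add(b);
                }
            }
            return toDelete;
        }

        // Intended behavior: mark only invalid blocks; the 100-block cap should
        // limit how many are sent per heartbeat response, not what gets marked
        // while processing the report.
        static List<Block> processReportIntended(List<Block> report) {
            List<Block> toDelete = new ArrayList<Block>();
            for (Block b : report) {
                if (!b.valid) {
                    toDelete.add(b);
                }
            }
            return toDelete;
        }

        public static void main(String[] args) {
            // 150 invalid blocks followed by 50 valid ones: the buggy path marks
            // all 200, the intended path marks only the 150 invalid blocks.
            List<Block> report = new ArrayList<Block>();
            for (long i = 0; i < 150; i++) report.add(new Block(i, false));
            for (long i = 150; i < 200; i++) report.add(new Block(i, true));
            System.out.println("buggy marks:    " + processReportBuggy(report).size());
            System.out.println("intended marks: " + processReportIntended(report).size());
        }
    }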

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

