hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1135) A block report processing may incorrectly cause the namenode to delete blocks
Date Wed, 21 Mar 2007 23:16:33 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12482955 ]

dhruba borthakur commented on HADOOP-1135:
------------------------------------------

I agree that this could cause data loss. 

> A block report processing may incorrectly cause the namenode to delete blocks 
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-1135
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1135
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: dhruba borthakur
>         Assigned To: dhruba borthakur
>         Attachments: blockReportInvalidateBlock2.patch
>
>
> When a block report arrives at the namenode, the namenode goes through all the blocks
> on that datanode. If a block is not valid, it is marked for deletion. The blocks-to-be-deleted
> are sent to the datanode as a response to the next heartbeat RPC. The namenode sends only
> 100 blocks-to-be-deleted at a time; this limit was introduced as part of HADOOP-994. The bug is
> that if the number of blocks-to-be-deleted exceeds 100, the namenode marks all the remaining
> blocks in the block report for deletion.
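
The failure mode can be sketched as follows. This is a simplified illustration, not the actual namenode code; the class, method names, and the `MAX_INVALIDATE` constant are hypothetical stand-ins for the real block-report processing path:

```java
import java.util.ArrayList;
import java.util.List;

public class BlockReportSketch {
    // Illustrative stand-in for the 100-block cap introduced by HADOOP-994.
    static final int MAX_INVALIDATE = 100;

    // Buggy shape: once the cap is reached, the loop marks every remaining
    // reported block for deletion, whether or not it is actually valid.
    static List<Integer> buggyProcessReport(List<Integer> reported, List<Integer> valid) {
        List<Integer> toDelete = new ArrayList<>();
        for (int block : reported) {
            if (!valid.contains(block) || toDelete.size() >= MAX_INVALIDATE) {
                toDelete.add(block); // bug: cap check folded into the validity test
            }
        }
        return toDelete;
    }

    // Fixed shape: only genuinely invalid blocks are ever marked; the cap
    // merely limits how many deletions go out with one heartbeat response,
    // and the rest wait for later heartbeats.
    static List<Integer> fixedProcessReport(List<Integer> reported, List<Integer> valid) {
        List<Integer> toDelete = new ArrayList<>();
        for (int block : reported) {
            if (!valid.contains(block)) {
                toDelete.add(block);
            }
        }
        return toDelete.subList(0, Math.min(MAX_INVALIDATE, toDelete.size()));
    }

    public static void main(String[] args) {
        // 300 reported blocks: the first 150 are stale, the rest are valid.
        List<Integer> reported = new ArrayList<>();
        List<Integer> valid = new ArrayList<>();
        for (int i = 0; i < 300; i++) {
            reported.add(i);
            if (i >= 150) valid.add(i);
        }
        System.out.println("buggy marks " + buggyProcessReport(reported, valid).size()
                + " blocks; fixed sends " + fixedProcessReport(reported, valid).size());
    }
}
```

With 150 stale blocks in a 300-block report, the buggy loop ends up marking all 300 blocks (including every valid one) once the cap is exceeded, which is the data-loss scenario described above; the fixed loop marks only the 150 stale blocks and sends 100 of them per heartbeat.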

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

