hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-1135) A block report processing may incorrectly cause the namenode to delete blocks
Date Tue, 20 Mar 2007 18:50:32 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dhruba borthakur updated HADOOP-1135:
-------------------------------------

    Attachment: blockReportInvalidateBlock2.patch

Code uploaded for code review. Unit test coming soon.

> A block report processing may incorrectly cause the namenode to delete blocks 
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-1135
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1135
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: dhruba borthakur
>         Assigned To: dhruba borthakur
>         Attachments: blockReportInvalidateBlock2.patch
>
>
> When a block report arrives at the namenode, the namenode goes through all the blocks
> on that datanode. If a block is not valid, it is marked for deletion. The blocks-to-be-deleted
> are sent to the datanode in the response to the next heartbeat RPC, and the namenode sends only
> 100 blocks-to-be-deleted at a time. This limit was introduced as part of HADOOP-994. The bug is
> that if the number of blocks-to-be-deleted exceeds 100, the namenode marks all the remaining
> blocks in the block report for deletion, whether or not they are valid.
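
For illustration only, here is a minimal, hypothetical Java sketch of the kind of capping mistake described above. The class, method, and constant names (BlockReportSketch, buggyProcessReport, BLOCK_INVALIDATE_LIMIT) are made up for this example and are not Hadoop's actual FSNamesystem code:

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Illustrative sketch of the failure mode described in the issue text;
    // names and structure are assumptions, not the real namenode code.
    class BlockReportSketch {

        static final int BLOCK_INVALIDATE_LIMIT = 100;  // the "100 at a time" cap from HADOOP-994

        // Buggy variant: once the cap is reached, every block still left in the
        // report is scheduled for deletion, valid or not.
        static List<Long> buggyProcessReport(List<Long> reportedBlocks, Set<Long> validBlocks) {
            List<Long> toDelete = new ArrayList<>();
            for (long blockId : reportedBlocks) {
                if (toDelete.size() >= BLOCK_INVALIDATE_LIMIT) {
                    toDelete.add(blockId);      // bug: valid blocks past the cap are deleted too
                } else if (!validBlocks.contains(blockId)) {
                    toDelete.add(blockId);      // invalid block, correctly scheduled for deletion
                }
            }
            return toDelete;
        }

        // Fixed variant: only invalid blocks are ever scheduled; the 100-block cap
        // should only throttle how many are sent per heartbeat response, not change
        // which blocks get chosen.
        static List<Long> fixedProcessReport(List<Long> reportedBlocks, Set<Long> validBlocks) {
            List<Long> toDelete = new ArrayList<>();
            for (long blockId : reportedBlocks) {
                if (!validBlocks.contains(blockId)) {
                    toDelete.add(blockId);
                }
            }
            return toDelete;                    // caller drains at most 100 per heartbeat
        }

        public static void main(String[] args) {
            Set<Long> valid = new HashSet<>();
            List<Long> report = new ArrayList<>();
            for (long i = 0; i < 300; i++) {
                report.add(i);
                if (i >= 150) {
                    valid.add(i);               // blocks 150..299 are valid, 0..149 are not
                }
            }
            // Buggy version schedules all 300 blocks (everything after the cap is swept in);
            // fixed version schedules only the 150 invalid ones.
            System.out.println("buggy: " + buggyProcessReport(report, valid).size());
            System.out.println("fixed: " + fixedProcessReport(report, valid).size());
        }
    }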

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

