hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4540) An invalidated block should be removed from the blockMap
Date Fri, 31 Oct 2008 20:29:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12644425#action_12644425
] 

dhruba borthakur commented on HADOOP-4540:
------------------------------------------

I agree with Hairong's proposal that when the NN schedules a block to be deleted, it should
delete it from the blocksMap. I have always wondered why the current code was written
to not delete the block immediately.
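For illustration only, here is a minimal sketch of the proposed behavior. This is not Hadoop's actual FSNamesystem code; the class and method names are hypothetical, and the map of block IDs to datanode names is a simplified stand-in for the namenode's blocksMap. The point it shows: once deletion of a replica is scheduled, the replica is dropped from the in-memory map immediately, so getBlockLocations stops returning that node without waiting for the next block report.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical, simplified stand-in for the namenode's blocksMap:
// maps each block ID to the set of datanodes believed to hold a replica.
class BlocksMapSketch {
    private final Map<Long, Set<String>> blockToNodes = new HashMap<>();

    void addReplica(long blockId, String node) {
        blockToNodes.computeIfAbsent(blockId, k -> new HashSet<>()).add(node);
    }

    // Proposed behavior: remove the replica from the map at the moment the
    // deletion is scheduled, instead of waiting for the next block report.
    void scheduleInvalidation(long blockId, String node) {
        Set<String> nodes = blockToNodes.get(blockId);
        if (nodes != null) {
            nodes.remove(node); // getBlockLocations no longer returns this node
            if (nodes.isEmpty()) {
                blockToNodes.remove(blockId);
            }
        }
        // ...the actual DELETE command for the datanode would be queued here...
    }

    Set<String> getLocations(long blockId) {
        return blockToNodes.getOrDefault(blockId, Collections.emptySet());
    }
}
```

Under this sketch, an unsuccessful deletion would later be re-detected as over-replication when the datanode's next block report still lists the replica, which is how HADOOP-4477's detection logic could catch it.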

> An invalidated block should be removed from the blockMap
> --------------------------------------------------------
>
>                 Key: HADOOP-4540
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4540
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.0
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.18.3
>
>
> Currently when a namenode schedules an over-replicated block for deletion, the replica to
be deleted does not get removed from the block map immediately. Instead it gets removed when
the next block report comes in. This causes three problems: 
> 1. getBlockLocations may return locations that do not contain the block;
> 2. Over-replication due to unsuccessful deletion cannot be detected, as described in
HADOOP-4477.
> 3. The number of blocks shown on the dfs Web UI does not get updated on a source node when
a large number of blocks have been moved from the source node to a target node, for example,
when running a balancer.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

