hadoop-common-dev mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-4540) An invalidated block should be removed from the blockMap
Date Thu, 30 Oct 2008 21:20:49 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-4540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hairong Kuang updated HADOOP-4540:

             Priority: Blocker  (was: Major)
    Affects Version/s:     (was: 0.18.0)
        Fix Version/s: 0.18.2
             Assignee: Hairong Kuang

This bug may cause block loss if a datanode containing the block repeatedly loses its heartbeat
and re-registers itself within a short period. Thus marking it as a blocker.

> An invalidated block should be removed from the blockMap
> --------------------------------------------------------
>                 Key: HADOOP-4540
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4540
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.0
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.18.2
> Currently, when a namenode schedules an over-replicated block for deletion, the replica to
be deleted does not get removed from the blockMap immediately. Instead it gets removed when the
next block report comes in. This causes three problems:
> 1. getBlockLocations may return locations that do not contain the block;
> 2. Over-replication due to unsuccessful deletion cannot be detected as described in
> 3. The number of blocks shown on the dfs Web UI does not get updated on a source node when
a large number of blocks have been moved from the source node to a target node, for example,
when running a balancer.
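
The fix described above can be sketched as follows. This is a minimal illustrative model, not Hadoop's actual namenode code: the class and method names (`BlockMapSketch`, `invalidateReplica`, etc.) are hypothetical. The point is that scheduling a replica for deletion and dropping its location from the block map happen in the same step, so `getBlockLocations` never returns a node that no longer holds the block.

```java
import java.util.*;

// Hypothetical sketch of a namenode-side block map where invalidating a
// replica removes its location immediately, instead of waiting for the
// datanode's next block report. Names are illustrative, not Hadoop's API.
class BlockMapSketch {
    // block ID -> set of datanode IDs currently believed to hold a replica
    private final Map<String, Set<String>> blockMap = new HashMap<>();
    // datanode ID -> blocks scheduled for deletion on that datanode
    private final Map<String, Set<String>> invalidateSets = new HashMap<>();

    void addReplica(String blockId, String datanodeId) {
        blockMap.computeIfAbsent(blockId, k -> new HashSet<>()).add(datanodeId);
    }

    // Schedule the replica for deletion AND drop the location from the
    // block map right away, so stale locations are never handed out.
    void invalidateReplica(String blockId, String datanodeId) {
        invalidateSets.computeIfAbsent(datanodeId, k -> new HashSet<>()).add(blockId);
        Set<String> locations = blockMap.get(blockId);
        if (locations != null) {
            locations.remove(datanodeId);
            if (locations.isEmpty()) {
                blockMap.remove(blockId);
            }
        }
    }

    // Returns only locations still believed valid.
    List<String> getBlockLocations(String blockId) {
        return new ArrayList<>(blockMap.getOrDefault(blockId, Collections.emptySet()));
    }
}
```

Without the immediate removal in `invalidateReplica`, the datanode chosen for deletion would remain in `blockMap` until its next block report, which is exactly the window in which problems 1 and 3 above arise.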

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
