hadoop-hdfs-issues mailing list archives

From "Mingliang Liu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9485) Make BlockManager#removeFromExcessReplicateMap accept BlockInfo instead of Block
Date Tue, 01 Dec 2015 04:25:11 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15033048#comment-15033048
] 

Mingliang Liu commented on HDFS-9485:
-------------------------------------

Thanks [~jingzhao] for reviewing this.

Failing tests are unrelated. {{TestDirectoryScanner}} seems flaky.

> Make BlockManager#removeFromExcessReplicateMap accept BlockInfo instead of Block
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-9485
>                 URL: https://issues.apache.org/jira/browse/HDFS-9485
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Mingliang Liu
>            Assignee: Mingliang Liu
>            Priority: Minor
>         Attachments: HDFS-9485.000.patch
>
>
> The {{BlockManager#removeFromExcessReplicateMap()}} method accepts a {{Block}} to remove from {{excessReplicateMap}}. However, the {{excessReplicateMap}} maps a StorageID to the set of {{BlockInfo}} objects that are "excess" for the DataNode with that StorageID. Removing a sub-class element from a collection, given only a base-class object, happens to work here because {{Set#remove(Object)}} matches by {{equals()}} rather than by static type.
> Alternatively, we can make {{removeFromExcessReplicateMap}} accept a {{BlockInfo}} object. Since the current call sites mostly pass {{BlockInfo}} objects already, the code change should be safe.
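
The behavior the description relies on can be sketched as follows. This is a minimal illustration with hypothetical stand-ins for HDFS's {{Block}} and {{BlockInfo}} (the real classes carry more state, e.g. generation stamp): {{Set#remove(Object)}} is declared to take {{Object}} and matches elements by {{equals()}}/{{hashCode()}}, so removing via a plain {{Block}} works even though the set holds {{BlockInfo}} instances.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical simplifications of HDFS's Block and BlockInfo classes,
// used only to illustrate why Set<BlockInfo>.remove(block) works.
class Block {
    final long blockId;
    Block(long blockId) { this.blockId = blockId; }
    @Override public boolean equals(Object o) {
        return o instanceof Block && ((Block) o).blockId == blockId;
    }
    @Override public int hashCode() { return Long.hashCode(blockId); }
}

class BlockInfo extends Block {
    BlockInfo(long blockId) { super(blockId); }
}

public class RemoveDemo {
    public static void main(String[] args) {
        Set<BlockInfo> excess = new HashSet<>();
        excess.add(new BlockInfo(42L));

        // remove(Object) consults only equals()/hashCode(), not the
        // static type, so a base-class Block removes the BlockInfo.
        boolean removed = excess.remove(new Block(42L));
        System.out.println("removed=" + removed + ", empty=" + excess.isEmpty());
    }
}
```

Tightening the signature to {{BlockInfo}} would let the compiler reject accidental {{Block}} arguments instead of relying on this {{equals()}} coincidence.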



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
