hadoop-hdfs-issues mailing list archives

From "Zhe Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-7211) Block invalidation work should be ordered
Date Wed, 08 Oct 2014 17:33:33 GMT

    [ https://issues.apache.org/jira/browse/HDFS-7211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14163827#comment-14163827 ]

Zhe Zhang commented on HDFS-7211:
---------------------------------

[~tlipcon] We just encountered an issue where a DN being decommissioned has a large number
of blocks (tens of thousands). Its block invalidation requests are inserted into the unordered
{{node2blocks}}, so some blocks get shuffled around in the set and are not invalidated for a
long time. Are you aware of an ordered (and still lightweight) collection type that we can use?
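
For illustration, a minimal JDK-only sketch of the ordering difference (class name and block IDs
are invented; a plain {{java.util.HashSet}} stands in for {{LightWeightHashSet}}, which likewise
iterates in hash order): {{java.util.LinkedHashSet}} preserves insertion order, which is the
property the invalidation queue needs.

{code:java}
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

// Minimal sketch, not Hadoop code: a plain HashSet iterates in
// hash-bucket order, so blocks queued first may be drained last.
// A LinkedHashSet iterates in insertion (arrival) order instead.
public class InvalidationOrderDemo {
  public static void main(String[] args) {
    long[] blockIds = {100L, 3L, 50L, 7L, 22L}; // arbitrary example IDs

    Set<Long> hashOrder = new HashSet<>();
    Set<Long> arrivalOrder = new LinkedHashSet<>();
    for (long id : blockIds) {
      hashOrder.add(id);
      arrivalOrder.add(id);
    }

    // The HashSet's iteration order need not match the order in which
    // the invalidation requests arrived; the LinkedHashSet's does.
    System.out.println("HashSet (hash order):          " + hashOrder);
    System.out.println("LinkedHashSet (arrival order): " + arrivalOrder);
  }
}
{code}

The trade-off is two extra references per entry versus {{LightWeightHashSet}}'s compactness;
even an approximately FIFO structure would avoid the starvation described above.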

> Block invalidation work should be ordered
> -----------------------------------------
>
>                 Key: HDFS-7211
>                 URL: https://issues.apache.org/jira/browse/HDFS-7211
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Zhe Zhang
>            Assignee: Zhe Zhang
>
> {{InvalidateBlocks#node2blocks}} uses an unordered {{LightWeightHashSet}} to store blocks
> (to be invalidated) on the same DN. This causes poor ordering when a single DN has a large
> number of blocks to invalidate. Blocks should be invalidated following the order of invalidation
> commands -- at least roughly.



