hadoop-hdfs-issues mailing list archives

From "Jing Zhao (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5579) Under construction files make DataNode decommission take very long hours
Date Mon, 13 Jan 2014 22:39:53 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13870079#comment-13870079 ]

Jing Zhao commented on HDFS-5579:
---------------------------------

The javadoc warning and TestSafeMode failure should be unrelated. I will commit the patch
shortly.

> Under construction files make DataNode decommission take very long hours
> ------------------------------------------------------------------------
>
>                 Key: HDFS-5579
>                 URL: https://issues.apache.org/jira/browse/HDFS-5579
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 1.2.0, 2.2.0
>            Reporter: zhaoyunjiong
>            Assignee: zhaoyunjiong
>         Attachments: HDFS-5579-branch-1.2.patch, HDFS-5579.patch
>
>
> We noticed that decommissioning DataNodes sometimes takes a very long time, even exceeding
> 100 hours.
> After checking the code, I found that BlockManager:computeReplicationWorkForBlocks(List<List<Block>>
> blocksToReplicate) will not replicate blocks that belong to under-construction files. However,
> in BlockManager:isReplicationInProgress(DatanodeDescriptor srcNode), any block that still needs
> replication, whether or not it belongs to an under-construction file, keeps the decommission
> counted as in progress.
> That is why decommissioning sometimes takes such a long time.
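
For readers less familiar with this code path, the sketch below illustrates the mismatch described above in simplified form. It is not the actual BlockManager code; the Block interface and its isUnderConstruction()/needsReplication() helpers are hypothetical stand-ins for the real data structures.

{code:java}
import java.util.List;

// Minimal sketch (assumed, simplified) of the inconsistency: the replication
// scheduler skips under-construction blocks, but the decommission progress
// check still counts them, so the node never finishes decommissioning.
class DecommissionSketch {

    interface Block {
        boolean isUnderConstruction();
        boolean needsReplication();
    }

    // Scheduler side: blocks of under-construction files are skipped,
    // so no replication work is ever queued for them.
    static int computeReplicationWork(List<Block> blocksToReplicate) {
        int scheduled = 0;
        for (Block b : blocksToReplicate) {
            if (b.isUnderConstruction()) {
                continue; // skipped: never copied off the decommissioning node
            }
            scheduled++;
        }
        return scheduled;
    }

    // Progress-check side: the same under-construction blocks still report
    // that replication is needed, so this keeps returning true and the node
    // stays in the decommissioning state.
    static boolean isReplicationInProgress(List<Block> blocksOnNode) {
        for (Block b : blocksOnNode) {
            if (b.needsReplication()) { // true for under-construction blocks too
                return true;
            }
        }
        return false;
    }
}
{code}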



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
