hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4937) ReplicationMonitor can infinite-loop in BlockPlacementPolicyDefault#chooseRandom()
Date Mon, 01 Jul 2013 16:24:20 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13696922#comment-13696922 ]

Kihwal Lee commented on HDFS-4937:

bq. Even then it was not able choose at least from them?

It couldn't pick enough nodes because the maximum number of replicas per rack had already
been calculated. I think it worked fine for the majority of blocks with 3 replicas, since
the cluster still had more than 3 racks after the refresh. The issue was with blocks that
have many more replicas. But picking enough nodes is only one of the loop's exit conditions;
the other checks whether the candidate nodes have been exhausted. The loop would have bailed
out if the cached cluster size had been updated inside it.

To avoid refreshing the cluster size frequently for this rare condition, we can update the
cached value every {{dfs.replication.max}} iterations, within which most blocks should find
all the nodes they need. If the NN hits this issue, it will loop {{dfs.replication.max}}
times and then break out. I prefer this over adding locking, which would slow down the
normal case.
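The idea above can be sketched as follows. This is a hypothetical stand-in, not the actual HDFS code: the class, field, and method names (e.g. {{liveClusterSize}} standing in for {{NetworkTopology#getNumOfLeaves()}}) are invented for illustration. The point is only the periodic refresh of the cached size every {{maxReplication}} iterations, which bounds how long the loop can run against a stale value.

```java
// Sketch of the proposed mitigation (hypothetical names, not HDFS code):
// re-read the real cluster size every maxReplication iterations, so a
// stale cached value cannot keep the selection loop alive forever.
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicInteger;

public class ChooseRandomSketch {
    // Stand-in for the live topology size (NetworkTopology#getNumOfLeaves()).
    static final AtomicInteger liveClusterSize = new AtomicInteger(10);

    /** Returns the chosen nodes; fewer than numNeeded if candidates run out. */
    static Set<Integer> chooseRandom(int numNeeded, int staleCachedSize,
                                     int maxReplication) {
        Set<Integer> chosen = new HashSet<>();
        Set<Integer> excluded = new HashSet<>();
        int cachedClusterSize = staleCachedSize;  // snapshot taken before a refresh
        int iterations = 0;
        while (chosen.size() < numNeeded
                && excluded.size() < cachedClusterSize) {
            if (++iterations % maxReplication == 0) {
                // The proposed fix: refresh the cached size periodically
                // instead of locking the topology on every iteration.
                cachedClusterSize = liveClusterSize.get();
            }
            int candidate = excluded.size();  // pretend-random pick
            if (candidate >= liveClusterSize.get()) {
                continue;  // no new live candidate; excluded can't grow
            }
            excluded.add(candidate);
            chosen.add(candidate);  // pretend it passes the placement checks
        }
        return chosen;
    }

    public static void main(String[] args) {
        // Simulate a refresh shrinking the cluster from 10 to 4 nodes while
        // a block with replication factor 8 is being processed.
        liveClusterSize.set(4);
        Set<Integer> got = chooseRandom(8, 10, 5);
        System.out.println("chose " + got.size() + " nodes");  // terminates with 4
    }
}
```

Without the periodic refresh, {{cachedClusterSize}} would stay at 10, the excluded set would cap at the 4 live nodes, and the while condition would never become false.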

> ReplicationMonitor can infinite-loop in BlockPlacementPolicyDefault#chooseRandom()
> ----------------------------------------------------------------------------------
>                 Key: HDFS-4937
>                 URL: https://issues.apache.org/jira/browse/HDFS-4937
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.0.4-alpha, 0.23.8
>            Reporter: Kihwal Lee
> When a large number of nodes are removed by refreshing the node lists, the network topology
is updated. If the refresh happens at the wrong moment, the replication monitor thread may
get stuck in the while loop of {{chooseRandom()}}. This is because a cached cluster size is
used in the loop's terminal condition check. It usually happens when a block with a high
replication factor is being processed. Since the replicas-per-rack limit is also calculated
beforehand, no node choice may satisfy the goodness criteria if the refresh removed racks.
> All nodes end up in the excluded list, but its size stays less than the cached cluster
size, so the loop runs forever. This was observed in a production environment.
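The failure mode described above can be reduced to a small simulation. This is not the actual {{BlockPlacementPolicyDefault}} code; the method and its parameters are invented for illustration. It shows only the core invariant violation: the excluded set can never grow past the actual cluster size, while the exhaustion check compares it against a cached, pre-refresh size.

```java
// Hypothetical reduction of the reported bug: the loop's terminal check
// compares the excluded set against a *cached* cluster size, but the set
// can only ever contain real, live nodes.
import java.util.HashSet;
import java.util.Set;

public class StaleSizeLoop {
    /**
     * Returns how many iterations the exhaustion check takes to fire,
     * or -1 if it never would (the bounded stand-in for "loops forever").
     */
    static int iterationsToExhaustion(int cachedClusterSize,
                                      int actualClusterSize,
                                      int maxIterations) {
        Set<Integer> excluded = new HashSet<>();
        for (int i = 0; i < maxIterations; i++) {
            // Only live nodes can be excluded, so the set caps at the
            // actual (post-refresh) cluster size ...
            if (excluded.size() < actualClusterSize) {
                excluded.add(excluded.size());
            }
            // ... but the terminal condition uses the cached size.
            if (excluded.size() >= cachedClusterSize) {
                return i + 1;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Cached size taken before the refresh: 10. Actual size after: 4.
        System.out.println(iterationsToExhaustion(10, 4, 1_000_000));  // -1: never exhausts
        System.out.println(iterationsToExhaustion(4, 4, 1_000_000));   // 4: exhausts quickly
    }
}
```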

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
