hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-1480) All replicas for a block with repl=2 end up in same rack
Date Fri, 24 Jun 2011 03:57:47 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13054231#comment-13054231
] 

Todd Lipcon commented on HDFS-1480:
-----------------------------------

Sorry, I think the above test actually fails because it will sometimes decommission all of
the nodes on one of the test racks.

But if you bump it up to 3 nodes in each rack, you'll see the new code path from HDFS-15
get triggered -- you can see it first re-replicate the block so that all replicas are on one
host, and then, after it gets the addStoredBlock calls, it notices the block isn't on enough
racks, re-replicates elsewhere, and eventually the random choice lands a replica on the right
rack.

> All replicas for a block with repl=2 end up in same rack
> --------------------------------------------------------
>
>                 Key: HDFS-1480
>                 URL: https://issues.apache.org/jira/browse/HDFS-1480
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.20.2
>            Reporter: T Meyarivan
>         Attachments: hdfs-1480-test.txt
>
>
> It appears that all replicas of a block can end up in the same rack. The likelihood of
such placement seems to be directly related to decommissioning of nodes.
> After a rolling OS upgrade of a running cluster (decommission 3-10% of nodes, re-install
etc., add them back), all replicas of about 0.16% of blocks ended up in the same rack.
> The Hadoop Namenode UI etc. doesn't seem to know about such incorrectly replicated blocks.
"hadoop fsck .." does report that the blocks must be replicated on additional racks.
> Looking at ReplicationTargetChooser.java, following seem suspect:
> snippet-01:
> {code}
>     int maxNodesPerRack =
>       (totalNumOfReplicas-1)/clusterMap.getNumOfRacks()+2;
> {code}
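The arithmetic in snippet-01 can be checked in isolation. The following standalone class (a hypothetical illustration, not part of HDFS) evaluates the same expression for a couple of cluster shapes; note that with repl=2 on a two-rack cluster the cap comes out to 2, so it never forbids placing both replicas in one rack.

```java
// Hypothetical standalone demo (not the actual HDFS class): evaluates the
// maxNodesPerRack formula from snippet-01 of ReplicationTargetChooser.
public class MaxNodesPerRackDemo {
    // Same integer arithmetic as snippet-01.
    static int maxNodesPerRack(int totalNumOfReplicas, int numOfRacks) {
        return (totalNumOfReplicas - 1) / numOfRacks + 2;
    }

    public static void main(String[] args) {
        // repl=2 on a 2-rack cluster: (2-1)/2 + 2 = 2, so one rack may
        // legally hold both replicas of a block.
        System.out.println(maxNodesPerRack(2, 2)); // 2
        // repl=3 on a 2-rack cluster: (3-1)/2 + 2 = 3, so again one rack
        // may hold every replica.
        System.out.println(maxNodesPerRack(3, 2)); // 3
    }
}
```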
> snippet-02:
> {code}
>       case 2:
>         if (clusterMap.isOnSameRack(results.get(0), results.get(1))) {
>           chooseRemoteRack(1, results.get(0), excludedNodes,
>                            blocksize, maxNodesPerRack, results);
>         } else if (newBlock){
>           chooseLocalRack(results.get(1), excludedNodes, blocksize,
>                           maxNodesPerRack, results);
>         } else {
>           chooseLocalRack(writer, excludedNodes, blocksize,
>                           maxNodesPerRack, results);
>         }
>         if (--numOfReplicas == 0) {
>           break;
>         }
> {code}
> snippet-03:
> {code}
>     do {
>       DatanodeDescriptor[] selectedNodes =
>         chooseRandom(1, nodes, excludedNodes);
>       if (selectedNodes.length == 0) {
>         throw new NotEnoughReplicasException(
>                                              "Not able to place enough replicas");
>       }
>       result = (DatanodeDescriptor)(selectedNodes[0]);
>     } while(!isGoodTarget(result, blocksize, maxNodesPerRack, results));
> {code}
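The loop in snippet-03 keeps drawing random nodes until one passes isGoodTarget, so the per-rack cap is the only rack-related gate. A minimal toy sketch (hypothetical code, assuming isGoodTarget enforced nothing but the per-rack cap; the real check also considers load, free space, and decommission state) shows why a same-rack candidate can pass when maxNodesPerRack is 2:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical toy model of the snippet-03 retry loop's rack check.
public class ChooseTargetSketch {
    record Node(String name, String rack) {}

    // Toy stand-in for isGoodTarget: accept the candidate as long as its rack
    // would not exceed maxNodesPerRack after placement.
    static boolean isGoodTarget(Node candidate, int maxNodesPerRack, List<Node> results) {
        long sameRack = results.stream()
                .filter(n -> n.rack().equals(candidate.rack()))
                .count();
        return sameRack + 1 <= maxNodesPerRack;
    }

    public static void main(String[] args) {
        // First replica already placed on /rackA.
        List<Node> results = new ArrayList<>(List.of(new Node("dn1", "/rackA")));
        // With maxNodesPerRack = 2 (repl=2, two racks), a random candidate on
        // the same rack still passes, so both replicas can land on /rackA.
        Node candidate = new Node("dn2", "/rackA");
        System.out.println(isGoodTarget(candidate, 2, results)); // true
    }
}
```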

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
