hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-1480) All replicas for a block with repl=2 end up in same rack
Date Fri, 24 Jun 2011 02:33:47 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13054217#comment-13054217 ]

Todd Lipcon commented on HDFS-1480:
-----------------------------------

There's definitely still a bug in trunk here. Here's my analysis (in this case with repl=3):

- Cluster has 2 racks, A and B, each with three nodes.
- Block initially written on A1, B1, B2
- Admin decommissions two of these nodes (let's say A1 and B1 but it doesn't matter)
- In {{computeReplicationWorkForBlock}}, it calls {{chooseSourceDatanode}}, which populates {{containingNodes}} with all three nodes regardless of decommission status
- This list is then passed through as {{chosenNodes}} in the call to {{chooseTarget}}
- {{maxNodesPerRack}} is then assigned (5-1)/2 + 2 = 4 (integer division; {{totalNumOfReplicas}} is 5 because the 2 replicas still needed are counted on top of the 3 chosen nodes)
- {{chooseTarget}} initializes {{results}} with these same three nodes
- {{numOfResults}} is thus 3, so it calls {{chooseRandom}} rather than basing its decision on local/remote racks
- {{chooseRandom}} is free to pick any two nodes, even two in the same rack, since {{maxNodesPerRack}} is 4 (see the sketch below)
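
To make the arithmetic concrete, here is a minimal, self-contained sketch of that calculation (plain Java, no Hadoop dependencies; the names mirror the variables above, but this is an illustration, not the actual source):

{code}
public class MaxNodesPerRackSketch {
  public static void main(String[] args) {
    // Scenario from the analysis above: repl=3, two racks,
    // replicas on A1, B1, B2, with A1 and B1 decommissioning.
    int numOfReplicas = 2;  // additional replicas still needed
    int chosenNodes   = 3;  // existing replicas, decommissioning or not
    int numOfRacks    = 2;  // racks A and B

    // The decommissioning replicas are still counted here:
    int totalNumOfReplicas = numOfReplicas + chosenNodes;             // 5
    int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 2;  // (5-1)/2 + 2 = 4

    // A single rack may hold up to 4 of the 5 counted replicas, so
    // chooseRandom can place both new replicas in rack A even though
    // rack B will hold no live replica once B1 is decommissioned.
    System.out.println("maxNodesPerRack = " + maxNodesPerRack);
  }
}
{code}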

Will attach a failing unit test momentarily.

> All replicas for a block with repl=2 end up in same rack
> --------------------------------------------------------
>
>                 Key: HDFS-1480
>                 URL: https://issues.apache.org/jira/browse/HDFS-1480
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.20.2
>            Reporter: T Meyarivan
>
> It appears that all replicas of a block can end up in the same rack. The likelihood of such misplacement seems to be directly related to the decommissioning of nodes.
> After a rolling OS upgrade of a running cluster (decommission 3-10% of the nodes, re-install, add them back), all replicas of about 0.16% of blocks ended up in the same rack.
> The Hadoop NameNode UI doesn't seem to know about such incorrectly replicated blocks. "hadoop fsck .." does report that the blocks must be replicated on additional racks.
> Looking at ReplicationTargetChooser.java, the following snippets seem suspect:
> snippet-01:
> {code}
>     int maxNodesPerRack =
>       (totalNumOfReplicas-1)/clusterMap.getNumOfRacks()+2;
> {code}
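> For the repl=2 case in the title, with no replicas chosen yet and two or more racks, this integer arithmetic gives:
> {code}
>     maxNodesPerRack = (2-1)/numOfRacks + 2 = 0 + 2 = 2
> {code}
> i.e. the per-rack cap by itself allows both replicas of a repl=2 block to land in a single rack.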
> snippet-02:
> {code}
>       case 2:
>         if (clusterMap.isOnSameRack(results.get(0), results.get(1))) {
>           chooseRemoteRack(1, results.get(0), excludedNodes,
>                            blocksize, maxNodesPerRack, results);
>         } else if (newBlock){
>           chooseLocalRack(results.get(1), excludedNodes, blocksize,
>                           maxNodesPerRack, results);
>         } else {
>           chooseLocalRack(writer, excludedNodes, blocksize,
>                           maxNodesPerRack, results);
>         }
>         if (--numOfReplicas == 0) {
>           break;
>         }
> {code}
> snippet-03:
> {code}
>     do {
>       DatanodeDescriptor[] selectedNodes =
>         chooseRandom(1, nodes, excludedNodes);
>       if (selectedNodes.length == 0) {
>         throw new NotEnoughReplicasException(
>                                              "Not able to place enough replicas");
>       }
>       result = (DatanodeDescriptor)(selectedNodes[0]);
>     } while(!isGoodTarget(result, blocksize, maxNodesPerRack, results));
> {code}


        
