hadoop-hdfs-issues mailing list archives

From "Tsz Wo Nicholas Sze (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDDS-699) Detect Ozone Network topology
Date Thu, 07 Mar 2019 00:38:00 GMT

    [ https://issues.apache.org/jira/browse/HDDS-699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786248#comment-16786248 ]

Tsz Wo Nicholas Sze commented on HDDS-699:
------------------------------------------

Thanks [~Sammi].  I just found that the logic for getLeaf(int leafIndex, String excludedScope,
Collection<Node> excludedNodes, int ancestorGen) is quite complicated.
- First of all, let's consistently use "level" instead of "generation" in the code.  In the
getLeaf methods, let's rename ancestorGen to numLevelToExclude.
- excludedScope and excludedNodes may overlap, so we should filter out the overlapping
nodes.
{code}
  public Node getLeaf(int leafIndex, String excludedScope,
      Collection<Node> excludedNodes, int numLevelToExclude) {
    Preconditions.checkArgument(leafIndex >= 0 && numLevelToExclude >= 0);
    if (excludedScope != null && excludedNodes != null) {
      excludedNodes = excludedNodes.stream()
          .filter(n -> !n.getNetworkFullPath().startsWith(excludedScope))
          .collect(Collectors.toList());
    }
    return getLeafRecursively(leafIndex, excludedScope, excludedNodes, numLevelToExclude);
// see below for getLeafRecursively
  }
{code}
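To see the filtering in isolation, here is a minimal standalone sketch of the same overlap check.  NetNode is a hypothetical stand-in for the real Node interface, keeping only getNetworkFullPath():

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ExcludeOverlapDemo {
  /** Hypothetical stand-in for the real Node; only the path matters here. */
  static class NetNode {
    final String path;
    NetNode(String path) { this.path = path; }
    String getNetworkFullPath() { return path; }
  }

  /** Drop excluded nodes that the excluded scope already covers. */
  static List<NetNode> filterOverlap(List<NetNode> excludedNodes,
      String excludedScope) {
    if (excludedScope == null) {
      return excludedNodes;
    }
    return excludedNodes.stream()
        .filter(n -> !n.getNetworkFullPath().startsWith(excludedScope))
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<NetNode> excluded = Arrays.asList(
        new NetNode("/rack1/node1"),   // covered by the excluded scope
        new NetNode("/rack2/node3"));  // outside the scope, must be kept
    List<NetNode> filtered = filterOverlap(excluded, "/rack1");
    System.out.println(filtered.size());                       // 1
    System.out.println(filtered.get(0).getNetworkFullPath());  // /rack2/node3
  }
}
```

After the filter, every node in excludedNodes is disjoint from excludedScope, so the two exclusions can be counted independently later on.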
- Let's change getAncestor(0) to return this. It will simplify the code.
{code}
  public Node getAncestor(int generation) {
    Preconditions.checkArgument(generation >= 0);
    Node current = this;
    while (generation > 0 && current != null) {
      current = current.getParent();
      generation--;
    }
    return current;
  }
{code}
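The proposed contract (generation 0 returns the node itself) can be sketched standalone; SimpleNode below is a hypothetical parent-linked node, not the real Node interface:

```java
public class AncestorDemo {
  /** Hypothetical parent-linked node, standing in for the real Node. */
  static class SimpleNode {
    final SimpleNode parent;
    SimpleNode(SimpleNode parent) { this.parent = parent; }

    /** Proposed contract: generation 0 is the node itself. */
    SimpleNode getAncestor(int generation) {
      SimpleNode current = this;
      while (generation > 0 && current != null) {
        current = current.parent;
        generation--;
      }
      return current;
    }
  }

  public static void main(String[] args) {
    SimpleNode root = new SimpleNode(null);
    SimpleNode rack = new SimpleNode(root);
    SimpleNode leaf = new SimpleNode(rack);
    System.out.println(leaf.getAncestor(0) == leaf);  // true: the node itself
    System.out.println(leaf.getAncestor(2) == root);  // true
    System.out.println(leaf.getAncestor(5));          // null: walked past the root
  }
}
```

With this contract, callers no longer need a special case for level 0, which is what simplifies the code below.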
- Then, we need to take care of the excluded-node counting with numLevelToExclude.  getAncestorNodeMap
seems incorrect since it does not consider numLevelToExclude.  When considering numLevelToExclude,
two excluded nodes under the same ancestor may or may not overlap.  We should filter out the
overlap first, as below.
{code}
  /**
   * @return a map: ancestor-node  -> node-count, where
   *         the ancestor-node corresponds to the levelToReturn, and
   *         the node-count corresponds to the levelToCount.
   */
  private Map<Node, Integer> getAncestorCounts(Collection<Node> nodes,
      int levelToReturn, int levelToCount) {
    Preconditions.checkState(levelToReturn >= levelToCount);
    if (nodes == null || nodes.size() == 0) {
      return Collections.emptyMap();
    }

    // map: levelToCount -> levelToReturn
    final Map<Node, Node> map = new HashMap<>();
    for (Node node: nodes) {
      final Node toReturn = node.getAncestor(levelToReturn);
      final Node toCount = levelToCount == levelToReturn ? toReturn: node.getAncestor(levelToCount);
      map.putIfAbsent(toCount, toReturn);
    }

    // map: levelToReturn -> counts
    final Map<Node, Integer> counts = new HashMap<>();
    for (Map.Entry<Node, Node> entry : map.entrySet()) {
      final Node toCount = entry.getKey();
      final Node toReturn = entry.getValue();
      counts.compute(toReturn, (key, n) -> (n == null? 0: n) + toCount.getNumOfLeaves());
    }

    return counts;
  }
{code}
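The two-map trick above can be exercised on a tiny mock tree.  In this hedged sketch, N is a hypothetical stand-in for Node with just a parent link and a leaf count; two excluded leaves under the same rack collapse into one map entry, so the rack's leaves are counted once rather than twice:

```java
import java.util.*;

public class AncestorCountDemo {
  /** Hypothetical stand-in for Node: parent link plus a derived leaf count. */
  static class N {
    final N parent;
    final List<N> children = new ArrayList<>();
    N(N parent) { this.parent = parent; if (parent != null) parent.children.add(this); }
    int getNumOfLeaves() {
      return children.isEmpty() ? 1
          : children.stream().mapToInt(N::getNumOfLeaves).sum();
    }
    N getAncestor(int gen) {
      N cur = this;
      while (gen > 0 && cur != null) { cur = cur.parent; gen--; }
      return cur;
    }
  }

  /** Same two-map trick as the proposed getAncestorCounts. */
  static Map<N, Integer> getAncestorCounts(Collection<N> nodes,
      int levelToReturn, int levelToCount) {
    final Map<N, N> map = new HashMap<>();  // toCount -> toReturn
    for (N node : nodes) {
      final N toReturn = node.getAncestor(levelToReturn);
      final N toCount = levelToCount == levelToReturn
          ? toReturn : node.getAncestor(levelToCount);
      map.putIfAbsent(toCount, toReturn);   // overlapping nodes collapse here
    }
    final Map<N, Integer> counts = new HashMap<>();
    for (Map.Entry<N, N> e : map.entrySet()) {
      counts.merge(e.getValue(), e.getKey().getNumOfLeaves(), Integer::sum);
    }
    return counts;
  }

  public static void main(String[] args) {
    N root = new N(null);
    N rackA = new N(root);
    N rackB = new N(root);
    N n1 = new N(rackA);
    N n2 = new N(rackA);
    N n3 = new N(rackB);
    // n1 and n2 share rackA: with levelToCount = 1 they overlap, so rackA's
    // two leaves are counted once (naive per-node counting would give 4).
    Map<N, Integer> counts = getAncestorCounts(Arrays.asList(n1, n2), 2, 1);
    System.out.println(counts.get(root));  // 2
  }
}
```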

- Finally, here is getLeafRecursively(..).  The other getLeaf methods can then be removed.
{code}
  private Node getLeafRecursively(int leafIndex, String excludedScope,
      Collection<Node> excludedNodes, int numLevelToExclude) {
    if (isLeafParent()) {
      return getLeafOnLeafParent(leafIndex, excludedScope, excludedNodes);
    }

    final int levelToReturn =
        NodeSchemaManager.getInstance().getMaxLevel() - getLevel() - 1;
    final Map<Node, Integer> excludedAncestors = getAncestorCounts(
        excludedNodes, levelToReturn, numLevelToExclude);
    final int excludedScopeCount = getScopeNodeCount(excludedScope);

    for (Node node : childrenMap.values()) {
      int leafCount = node.getNumOfLeaves();
      if (excludedScope != null
          && excludedScope.startsWith(node.getNetworkFullPath())) {
        leafCount -= excludedScopeCount;
      }
      // getOrDefault avoids an NPE for children with no excluded ancestors.
      leafCount -= excludedAncestors.getOrDefault(node, 0);
      if (leafCount > 0) {
        if (leafIndex < leafCount) {
          return ((InnerNodeImpl)node).getLeafRecursively(
              leafIndex, excludedScope, excludedNodes, numLevelToExclude);
        }
        leafIndex -= leafCount;
      }
    }
    return null;
  }
{code}


> Detect Ozone Network topology
> -----------------------------
>
>                 Key: HDDS-699
>                 URL: https://issues.apache.org/jira/browse/HDDS-699
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Xiaoyu Yao
>            Assignee: Sammi Chen
>            Priority: Major
>         Attachments: HDDS-699.00.patch, HDDS-699.01.patch, HDDS-699.02.patch, HDDS-699.03.patch, HDDS-699.04.patch, HDDS-699.05.patch
>
>
> Traditionally this has been implemented in Hadoop via script or customizable java class. One thing we want to add here is the flexible multi-level support instead of fixed levels like DC/Rack/NG/Node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

