hadoop-yarn-issues mailing list archives

From "Zhijie Shen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-1071) ResourceManager's decommissioned and lost node count is 0 after restart
Date Thu, 20 Feb 2014 20:09:24 GMT

    [ https://issues.apache.org/jira/browse/YARN-1071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13907437#comment-13907437 ]

Zhijie Shen commented on YARN-1071:
-----------------------------------

The approach should fix the problem here. Some minor comments:

1. It's better to use System.getProperty("line.separator") instead of the hard-coded "\n":
{code}
+        fStream.write("\n".getBytes());
{code}
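For example (same write, just using the platform line separator):
{code}
+        fStream.write(System.getProperty("line.separator").getBytes());
{code}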

2. Put the setter in the two-argument HostsFileReader#refresh instead?
{code}
+    ClusterMetrics.getMetrics().setDecommisionedNMs(excludeList.size());
{code}
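i.e., roughly the following, wherever the two-argument refresh happens (a sketch only; hostsReader, includesFile and excludesFile are placeholder names, not the actual patch code):
{code}
    synchronized (hostsReader) {
      hostsReader.refresh(includesFile, excludesFile);
      // keep the decommissioned-NM gauge in sync with the exclude list
      ClusterMetrics.getMetrics().setDecommisionedNMs(
          hostsReader.getExcludedHosts().size());
    }
{code}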

3. Check the IP as well, as we do in NodesListManager#isValidNode?
{code}
+      if (!context.getNodesListManager().getHostsReader().getExcludedHosts()
+        .contains(hostName)) {
{code}
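For illustration, mirroring isValidNode would look roughly like this (NetUtils#normalizeHostName is how isValidNode resolves the IP; the surrounding diff context here is only a sketch):
{code}
+      Set<String> excludedHosts = context.getNodesListManager()
+          .getHostsReader().getExcludedHosts();
+      String ip = NetUtils.normalizeHostName(hostName);
+      if (!(excludedHosts.contains(hostName) || excludedHosts.contains(ip))) {
{code}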

4. In testDecomissionedNMsMetricsOnRMRestart, would it be good to involve an NM that has been
decommissioned before the restart, and verify that it does not corrupt the count after the restart?

In addition to the whitelist scenario, there's another one that the approach may not handle:
a. host1 in blacklist
b. refresh node, count = 1
c. rm stops
d. *blacklist change, host2 replaces host1 in blacklist*
e. rm starts
f. count = 1; however, both host1 and host2 have actually been decommissioned

Not sure whether changing the blacklist between RM stop and start will be a common case. Probably
we don't want to deal with it now.

> ResourceManager's decommissioned and lost node count is 0 after restart
> -----------------------------------------------------------------------
>
>                 Key: YARN-1071
>                 URL: https://issues.apache.org/jira/browse/YARN-1071
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: resourcemanager
>    Affects Versions: 2.1.0-beta
>            Reporter: Srimanth Gunturi
>            Assignee: Jian He
>         Attachments: YARN-1071.1.patch, YARN-1071.2.patch, YARN-1071.3.patch
>
>
> I had 6 nodes in a cluster with 2 NMs stopped. Then I put a host into YARN's {{yarn.resourcemanager.nodes.exclude-path}}.
> After running {{yarn rmadmin -refreshNodes}}, RM's JMX correctly showed the decommissioned node count:
> {noformat}
> "NumActiveNMs" : 3,
> "NumDecommissionedNMs" : 1,
> "NumLostNMs" : 2,
> "NumUnhealthyNMs" : 0,
> "NumRebootedNMs" : 0
> {noformat}
> After restarting RM, the counts were shown as below in JMX.
> {noformat}
> "NumActiveNMs" : 3,
> "NumDecommissionedNMs" : 0,
> "NumLostNMs" : 0,
> "NumUnhealthyNMs" : 0,
> "NumRebootedNMs" : 0
> {noformat}
> Notice that the lost and decommissioned NM counts are both 0.



