From "Daniel Zhi (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (YARN-4676) Automatic and Asynchronous Decommissioning Nodes Status Tracking
Date Thu, 11 Aug 2016 21:13:20 GMT

     [ https://issues.apache.org/jira/browse/YARN-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Zhi updated YARN-4676:
-----------------------------
    Attachment: YARN-4676.021.patch

YARN-4676.021.patch addresses some of the most recent comments.
1. Removed the doc GracefulDecommission.md.
2. The recent HostsFileReader change to use a readLock and expose a copy of the content to the
caller led to inefficient code such as isNodeValid() inside NodesListManager, where a copy of the
content was created for every single host check. That said, getExcludedHostsWithTimeout() is
supposed to be removed given the lock-protected getHostDetails(). I have fixed
NodesListManager.isNodeValid() to be efficient as well (see the isNodeValid() sketch after this list).
3. Done: a list of hosts is now supported in the <name> tag (an example exclude file is sketched
after this list).
4. All the read methods inside HostsFileReader throw IOException because fileInputStream.close()
inside the finally block may itself throw IOException (see the sketch after this list).
5. Updated the usage text for -g|graceful.
6. Updated the comments; hopefully they help. Basically, a node without any running container might
have had a map container for an application that is still running. The cost of tracking the
"list of running applications this node was historically part of" exceeds the benefit, so currently
such an "idle" node will be DECOMMISSIONED and the affected map tasks will be rescheduled. If such
info can in the future be efficiently tracked/obtained from RMNodeImpl, then
DecommissioningNodesWatcher can leverage it.
7. The 60-second delayed removal is to prevent such a node from suddenly disappearing from the
status log (currently debug-mode only); instead it will appear as DECOMMISSIONED before it is
removed (see the delayed-removal sketch after this list).
8. Switched to MonotonicClock.
9. 1) PollTimerTask is by default scheduled to run once every 20 seconds, without an initial
delay. There is no tight loop (see the scheduling sketch after this list).
   2) Normally the NM heartbeats every second, so the timeout tracking happens every second; the
stale-check logic guarantees a check for a node that has missed heartbeat updates for 30 seconds.
The logic only matters for terminated instances. That said, I don't see a downside to reducing
30 seconds to 5 seconds, as long as it avoids re-checking nodes that are already checked regularly.
   3) I don't think I fully understand this point.
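
Below are a few illustrative sketches for the items above; they are rough sketches under stated
assumptions, not the actual patch code.

Sketch for item 2 (NodesListManager.isNodeValid()): assumes the snapshot returned by the
lock-protected HostsFileReader.getHostDetails() exposes include/exclude sets; the accessor names
below are illustrative.

    import java.util.Set;

    // Sketch only: check a node against one consistent snapshot of the hosts
    // files instead of copying the full file content on every call.
    class NodeValidityCheck {
      interface HostDetails {                    // assumed shape of the snapshot
        Set<String> getIncludedHosts();
        Set<String> getExcludedHosts();
      }

      static boolean isNodeValid(HostDetails details, String hostName) {
        Set<String> includes = details.getIncludedHosts();
        Set<String> excludes = details.getExcludedHosts();
        // Usable when the include list is empty or contains the host,
        // and the exclude list does not contain it.
        return (includes.isEmpty() || includes.contains(hostName))
            && !excludes.contains(hostName);
      }
    }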
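
Sketch for item 3 (list of hosts in the <name> tag): an exclude file in the XML form could look
roughly like this, with a comma-separated list of hosts sharing one timeout; illustrative format,
not copied from the patch.

    <?xml version="1.0"?>
    <hosts>
      <host><name>host1</name></host>                                   <!-- default timeout -->
      <host><name>host2</name><timeout>1800</timeout></host>            <!-- 30-minute timeout -->
      <host><name>host3,host4,host5</name><timeout>-1</timeout></host>  <!-- wait forever -->
    </hosts>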
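
Sketch for item 4 (why the read methods declare IOException): close() in a finally block can
itself throw, even when the read succeeded; illustrative code, not the actual HostsFileReader.

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    class ReadSketch {
      // Declares IOException because close() in the finally block may throw,
      // even when reading itself succeeded.
      static int readFirstByte(String path) throws IOException {
        InputStream in = new FileInputStream(path);
        try {
          return in.read();        // may throw IOException
        } finally {
          in.close();              // this call alone may also throw IOException
        }
      }
    }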
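
Sketch for item 7 (and the MonotonicClock switch in item 8): keep a fully DECOMMISSIONED node in
the tracking map for about 60 seconds so it still appears as DECOMMISSIONED in the status log
before being dropped; field and method names are made up for illustration.

    import java.util.Iterator;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class DelayedRemovalSketch {
      private static final long REMOVAL_DELAY_MS = 60_000L;
      private final Map<String, Long> decommissionedAtMs = new ConcurrentHashMap<String, Long>();

      // Monotonic milliseconds, in the spirit of MonotonicClock (not wall-clock time).
      private static long monotonicNowMs() {
        return System.nanoTime() / 1_000_000L;
      }

      void onDecommissioned(String nodeId) {
        decommissionedAtMs.putIfAbsent(nodeId, monotonicNowMs());
      }

      // Called from the periodic poll task.
      void removeStaleEntries() {
        long nowMs = monotonicNowMs();
        Iterator<Map.Entry<String, Long>> it = decommissionedAtMs.entrySet().iterator();
        while (it.hasNext()) {
          if (nowMs - it.next().getValue() >= REMOVAL_DELAY_MS) {
            it.remove();   // node no longer needs to show up in the status log
          }
        }
      }
    }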
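
Sketch for item 9.1 (poll scheduling): the pattern of a timer task running every 20 seconds with
no initial delay, shown with java.util.Timer; the real PollTimerTask is internal to
DecommissioningNodesWatcher, this only illustrates that there is no tight loop.

    import java.util.Timer;
    import java.util.TimerTask;

    class PollSchedulingSketch {
      public static void main(String[] args) throws InterruptedException {
        Timer timer = new Timer("DecommissioningNodesWatcher poll timer");
        TimerTask pollTask = new TimerTask() {
          @Override
          public void run() {
            // Periodic work: re-check DECOMMISSIONING nodes, run the stale check
            // for nodes that stopped heartbeating, purge old DECOMMISSIONED entries.
            System.out.println("poll tick");
          }
        };
        // No initial delay, then once every 20 seconds; the timer thread sleeps
        // between runs, so there is no tight loop.
        timer.schedule(pollTask, 0L, 20_000L);

        Thread.sleep(65_000L);   // let the demo run a few ticks, then stop
        timer.cancel();
      }
    }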

Note: I will be busy on new projects and won't be able to afford further iterations on this JIRA
unless it is an immediate bug and within the next two weeks.

> Automatic and Asynchronous Decommissioning Nodes Status Tracking
> ----------------------------------------------------------------
>
>                 Key: YARN-4676
>                 URL: https://issues.apache.org/jira/browse/YARN-4676
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: resourcemanager
>    Affects Versions: 2.8.0
>            Reporter: Daniel Zhi
>            Assignee: Daniel Zhi
>              Labels: features
>         Attachments: GracefulDecommissionYarnNode.pdf, GracefulDecommissionYarnNode.pdf,
> YARN-4676.004.patch, YARN-4676.005.patch, YARN-4676.006.patch, YARN-4676.007.patch, YARN-4676.008.patch,
> YARN-4676.009.patch, YARN-4676.010.patch, YARN-4676.011.patch, YARN-4676.012.patch, YARN-4676.013.patch,
> YARN-4676.014.patch, YARN-4676.015.patch, YARN-4676.016.patch, YARN-4676.017.patch, YARN-4676.018.patch,
> YARN-4676.019.patch, YARN-4676.020.patch, YARN-4676.021.patch
>
>
> YARN-4676 implements an automatic, asynchronous and flexible mechanism to gracefully decommission
> YARN nodes. After the user issues the refreshNodes request, the ResourceManager automatically
> evaluates the status of all affected nodes to kick off decommission or recommission actions. The RM
> asynchronously tracks container and application status related to DECOMMISSIONING nodes in order to
> decommission the nodes immediately once they are ready to be decommissioned. Decommissioning timeouts
> at individual-node granularity are supported and can be dynamically updated. The mechanism naturally
> supports multiple independent graceful decommissioning “sessions” where each one involves different
> sets of nodes with different timeout settings. Such support is ideal and necessary for graceful
> decommission requests issued by external cluster management software instead of humans.
> DecommissioningNodesWatcher inside ResourceTrackerService tracks DECOMMISSIONING node status
> automatically and asynchronously after the client/admin makes the graceful decommission request. It
> tracks DECOMMISSIONING node status to decide when, after all running containers on the node have
> completed, the node will be transitioned into the DECOMMISSIONED state. NodesListManager detects and
> handles include and exclude list changes to kick off decommission or recommission as necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
