hadoop-yarn-issues mailing list archives

From "Robert Kanter (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM
Date Tue, 05 Dec 2017 18:30:00 GMT

    [ https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279006#comment-16279006 ]

Robert Kanter commented on YARN-6483:
-------------------------------------

YARN-7162 is the one that actually removes the XML parsing code.  There are more details on
YARN-7162, but in a nutshell, we didn't want to get locked into supporting this exact XML
format for the excludes file, because it could change once YARN-5536 is completed, which
aims to add a JSON format and make the format pluggable.  Not shipping the current XML format
in 3.0 allows us to do that.

> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned to the AM
> ------------------------------------------------------------------------------------------------
>
>                 Key: YARN-6483
>                 URL: https://issues.apache.org/jira/browse/YARN-6483
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: resourcemanager
>            Reporter: Juan Rodríguez Hortalá
>            Assignee: Juan Rodríguez Hortalá
>             Fix For: 3.1.0
>
>         Attachments: YARN-6483-v1.patch, YARN-6483.002.patch, YARN-6483.003.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful decommissioning
> mechanism to give time for tasks to complete on a node that is scheduled for decommission,
> and for reducer tasks to read the shuffle blocks on that node. YARN also effectively blacklists
> nodes in the DECOMMISSIONING state by assigning them a capacity of 0, to prevent additional
> containers from being launched on those nodes, so no more shuffle blocks are written to the node.
> This blacklisting is not effective for applications like Spark, because a Spark executor running
> in a YARN container will keep receiving more tasks after the corresponding node has been
> blacklisted at the YARN level. We would like to propose a modification of the YARN heartbeat
> mechanism so that nodes transitioning to DECOMMISSIONING are added to the list of updated nodes
> returned by the Resource Manager in response to the Application Master heartbeat. This way a Spark
> application master would be able to blacklist a DECOMMISSIONING node at the Spark level.
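
For illustration, here is a minimal sketch (not part of the attached patches) of how an
application master could consume the updated-node list once DECOMMISSIONING nodes are
included in it. It only uses the existing AMRMClient, AllocateResponse, NodeReport and
NodeState APIs; the appBlacklist set and the heartbeatOnce method are hypothetical
stand-ins for whatever application-level blacklisting logic the AM (e.g. Spark) already has.

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.NodeId;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.exceptions.YarnException;

public class DecommissioningAwareAM {

  // Hypothetical application-level blacklist; a real AM would feed this
  // into its own task scheduler rather than just collecting node IDs.
  private final Set<NodeId> appBlacklist = new HashSet<>();

  // One AM heartbeat: with the proposed change, nodes entering
  // DECOMMISSIONING are reported in getUpdatedNodes() of the response.
  public void heartbeatOnce(AMRMClient<ContainerRequest> amRmClient)
      throws YarnException, IOException {
    AllocateResponse response = amRmClient.allocate(0.1f);
    for (NodeReport report : response.getUpdatedNodes()) {
      if (report.getNodeState() == NodeState.DECOMMISSIONING) {
        // Stop scheduling new work on this node at the application level.
        appBlacklist.add(report.getNodeId());
      }
    }
  }
}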



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: yarn-issues-help@hadoop.apache.org

