From "Naganarasimha G R (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-4140) RM container allocation delayed incase of app submitted to Nodelabel partition
Date Sun, 13 Sep 2015 22:43:46 GMT

    [ https://issues.apache.org/jira/browse/YARN-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14742710#comment-14742710 ]

Naganarasimha G R commented on YARN-4140:
-----------------------------------------

hi [~bibinchundatt],
Thanks for updating with a patch. I just took a high-level look at it. It seems we can optimize a bit here, since node labels may not always be set.
Currently you always loop through twice. Instead, in the first loop, while checking for the ANY requests, we can populate the anyPrioritymap with the additional check
{code}
(null != anyResourceRequest.getNodeLabelExpression())
            && (!anyResourceRequest.getNodeLabelExpression().equals(
                RMNodeLabelsManager.NO_LABEL))
{code}
After this loop, if the map contains any elements, we can loop again to update the node- and rack-local requests with the NodeLabelExpression.
Also, HashMap<Priority, ResourceRequest> anyPrioritymap could become Map<Priority, String> priorityToNodeLabelMapping.
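
To make the suggestion concrete, here is a minimal sketch of the two-pass flow, assuming the stock ResourceRequest and RMNodeLabelsManager APIs; the helper class and method names are hypothetical and not from the attached patch:
{code}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.ResourceRequest;
import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;

// Hypothetical helper; names are illustrative, not from the patch.
public final class NodeLabelRequestUpdater {

  public static void propagateNodeLabels(List<ResourceRequest> requests) {
    // Pass 1: record a label only for priorities whose ANY (off-switch)
    // request carries a non-default node label expression.
    Map<Priority, String> priorityToNodeLabelMapping =
        new HashMap<Priority, String>();
    for (ResourceRequest req : requests) {
      String label = req.getNodeLabelExpression();
      if (ResourceRequest.ANY.equals(req.getResourceName())
          && label != null
          && !label.equals(RMNodeLabelsManager.NO_LABEL)) {
        priorityToNodeLabelMapping.put(req.getPriority(), label);
      }
    }
    // Pass 2 runs only when at least one priority actually has a label.
    if (priorityToNodeLabelMapping.isEmpty()) {
      return;
    }
    for (ResourceRequest req : requests) {
      String label = priorityToNodeLabelMapping.get(req.getPriority());
      if (label != null && !ResourceRequest.ANY.equals(req.getResourceName())) {
        // Bring the node- and rack-local requests in line with the ANY request.
        req.setNodeLabelExpression(label);
      }
    }
  }
}
{code}
This way the second loop is skipped entirely in the common case where no request carries a node label expression.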

> RM container allocation delayed incase of app submitted to Nodelabel partition
> ------------------------------------------------------------------------------
>
>                 Key: YARN-4140
>                 URL: https://issues.apache.org/jira/browse/YARN-4140
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: api, client, resourcemanager
>            Reporter: Bibin A Chundatt
>            Assignee: Bibin A Chundatt
>         Attachments: 0001-YARN-4140.patch
>
>
> While trying to run an application on a node label partition, I found that application execution is delayed by 5 to 10 minutes for 500 containers. The cluster had 3 machines in total; 2 machines were in the same partition, and the app was submitted to that partition.
> After enabling debug logging, I was able to find the following:
> # From the AM, the container ask is for OFF_SWITCH.
> # The RM allocates all containers as NODE_LOCAL, as shown in the logs below.
> # Since there were about 500 containers, it took about 6 minutes to allocate the 1st map after AM allocation.
> # Tested with about 1K maps using a Pi job, it took 17 minutes to allocate the next container after AM allocation.
> Only once all 500 NODE_LOCAL container allocations are done is the next container allocated as OFF_SWITCH.
> {code}
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: showRequests: application=application_1441791998224_0001 request={Priority: 20, Capability: <memory:512, vCores:1>, # Containers: 500, Location: /default-rack, Relax Locality: true, Node Label Expression: }
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: showRequests: application=application_1441791998224_0001 request={Priority: 20, Capability: <memory:512, vCores:1>, # Containers: 500, Location: *, Relax Locality: true, Node Label Expression: 3}
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: showRequests: application=application_1441791998224_0001 request={Priority: 20, Capability: <memory:512, vCores:1>, # Containers: 500, Location: host-10-19-92-143, Relax Locality: true, Node Label Expression: }
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: showRequests: application=application_1441791998224_0001 request={Priority: 20, Capability: <memory:512, vCores:1>, # Containers: 500, Location: host-10-19-92-117, Relax Locality: true, Node Label Expression: }
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> {code}
>  
> {code}
> 2015-09-09 14:35:45,467 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> 2015-09-09 14:35:45,831 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> 2015-09-09 14:35:46,469 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> 2015-09-09 14:35:46,832 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> {code}
> {code}
> dsperf@host-127:/opt/bibin/dsperf/HAINSTALL/install/hadoop/resourcemanager/logs1> cat hadoop-dsperf-resourcemanager-host-127.log | grep "NODE_LOCAL" | grep "root.b.b1" | wc -l
> 500
> {code}
>  
> (This consumes about 6 minutes.)
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
