hadoop-yarn-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-4140) RM container allocation delayed in case of app submitted to Nodelabel partition
Date Mon, 28 Sep 2015 17:11:04 GMT

    [ https://issues.apache.org/jira/browse/YARN-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14933582#comment-14933582 ]

Hadoop QA commented on YARN-4140:
---------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  16m  2s | Findbugs (version ) appears to be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 12s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc |  10m 29s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit |   0m 26s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   0m 42s | The applied patch generated 39 new checkstyle issues (total was 0, now 39). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 33s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | yarn tests |  61m 49s | Tests failed in hadoop-yarn-server-resourcemanager. |
| | | 101m 29s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12764031/0012-YARN-4140.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 892ade6 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/9286/artifact/patchprocess/diffcheckstylehadoop-yarn-server-resourcemanager.txt |
| hadoop-yarn-server-resourcemanager test log | https://builds.apache.org/job/PreCommit-YARN-Build/9286/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/9286/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/9286/console |


This message was automatically generated.

> RM container allocation delayed in case of app submitted to Nodelabel partition
> --------------------------------------------------------------------------------
>
>                 Key: YARN-4140
>                 URL: https://issues.apache.org/jira/browse/YARN-4140
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: api, client, resourcemanager
>            Reporter: Bibin A Chundatt
>            Assignee: Bibin A Chundatt
>         Attachments: 0001-YARN-4140.patch, 0002-YARN-4140.patch, 0003-YARN-4140.patch, 0004-YARN-4140.patch, 0005-YARN-4140.patch, 0006-YARN-4140.patch, 0007-YARN-4140.patch, 0008-YARN-4140.patch, 0009-YARN-4140.patch, 0010-YARN-4140.patch, 0011-YARN-4140.patch, 0012-YARN-4140.patch
>
>
> While running an application on a node label partition, I found that application execution is delayed by 5 to 10 minutes for 500 containers. The cluster had 3 machines in total; 2 were in the same partition, and the app was submitted to that partition.
> After enabling debug logging, I found the following:
> # From the AM, the container ask is for OFF_SWITCH.
> # The RM allocates all containers as NODE_LOCAL, as shown in the logs below.
> # Since there were about 500 containers, it took about 6 minutes to allocate the 1st map container after AM allocation.
> # Tested with about 1K maps using a Pi job, it took 17 minutes to allocate the next container after AM allocation.
> Only once all 500 NODE_LOCAL container allocations are done is the next container allocated OFF_SWITCH (see the request sketch after the logs).
> {code}
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: showRequests: application=application_1441791998224_0001 request={Priority: 20, Capability: <memory:512, vCores:1>, # Containers: 500, Location: /default-rack, Relax Locality: true, Node Label Expression: }
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: showRequests: application=application_1441791998224_0001 request={Priority: 20, Capability: <memory:512, vCores:1>, # Containers: 500, Location: *, Relax Locality: true, Node Label Expression: 3}
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: showRequests: application=application_1441791998224_0001 request={Priority: 20, Capability: <memory:512, vCores:1>, # Containers: 500, Location: host-10-19-92-143, Relax Locality: true, Node Label Expression: }
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: showRequests: application=application_1441791998224_0001 request={Priority: 20, Capability: <memory:512, vCores:1>, # Containers: 500, Location: host-10-19-92-117, Relax Locality: true, Node Label Expression: }
> 2015-09-09 15:21:58,954 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> {code}
>  
> {code}
> 2015-09-09 14:35:45,467 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> 2015-09-09 14:35:45,831 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> 2015-09-09 14:35:46,469 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> 2015-09-09 14:35:46,832 DEBUG org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Assigned to queue: root.b.b1 stats: b1: capacity=1.0, absoluteCapacity=0.5, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=1 --> <memory:0, vCores:0>, NODE_LOCAL
> {code}
> {code}
> dsperf@host-127:/opt/bibin/dsperf/HAINSTALL/install/hadoop/resourcemanager/logs1> cat hadoop-dsperf-resourcemanager-host-127.log | grep "NODE_LOCAL" | grep "root.b.b1" | wc -l
> 500
> {code}
>  
> (These 500 NODE_LOCAL allocations consume about 6 minutes.)
>  
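> For reference, a minimal sketch of the ask the AM appears to be making, reconstructed from the showRequests lines above (the host names, priority 20, capability, and label "3" are taken from those logs; this is illustrative, not code from the attached patches). Note that only the ANY (*) request carries the node label expression:
> {code}
> import org.apache.hadoop.yarn.api.records.Priority;
> import org.apache.hadoop.yarn.api.records.Resource;
> import org.apache.hadoop.yarn.api.records.ResourceRequest;
> 
> Priority priority = Priority.newInstance(20);
> Resource capability = Resource.newInstance(512, 1); // <memory:512, vCores:1>
> 
> // Host- and rack-level asks: empty node label expression, Relax Locality: true.
> ResourceRequest host1 = ResourceRequest.newInstance(priority, "host-10-19-92-143", capability, 500);
> ResourceRequest host2 = ResourceRequest.newInstance(priority, "host-10-19-92-117", capability, 500);
> ResourceRequest rack = ResourceRequest.newInstance(priority, "/default-rack", capability, 500);
> 
> // Only the ANY (off-switch) ask carries the partition label.
> ResourceRequest any = ResourceRequest.newInstance(priority, ResourceRequest.ANY, capability, 500);
> any.setNodeLabelExpression("3");
> {code}
> This matches the debug output above: the host and rack requests show an empty Node Label Expression while only the * request shows label 3, which is consistent with all 500 allocations going NODE_LOCAL before any OFF_SWITCH allocation.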



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
