hadoop-yarn-issues mailing list archives

From "Chen Yufei (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-8513) CapacityScheduler infinite loop when queue is near fully utilized
Date Sun, 19 Aug 2018 06:37:00 GMT

    [ https://issues.apache.org/jira/browse/YARN-8513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16585036#comment-16585036

Chen Yufei commented on YARN-8513:

[~cheersyang] Thanks for looking into this issue.

The attached log file is truncated (because most lines are repeated); the full log for a single second
is 60MB and contains about 5300 lines of "Trying to schedule on node".
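For reference, a per-second count like that can be produced with a pipeline along these lines (a rough sketch; the file name assumes the attached yarn3-resourcemanager.log, and the awk fields assume the standard log4j "DATE TIME,millis LEVEL ..." layout shown in the log excerpts below):

```shell
# Count "Trying to schedule on node" lines per second in the RM log.
# $1 is the date, substr($2,1,8) trims the time to HH:MM:SS.
grep 'Trying to schedule on node' yarn3-resourcemanager.log \
  | awk '{print $1, substr($2, 1, 8)}' \
  | sort | uniq -c | sort -rn | head
```

The busiest seconds come out first, which makes the tight-loop behavior easy to spot.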

Cluster size:

* default partition: 79 NM
* sim: 45 NM
* gpu: 0 NM (we still have Hadoop 2.9.1 running and some nodes haven't joined the new version
of Hadoop)

The NM heartbeat interval is unchanged from the default value of 1000ms.
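For completeness, pinning that interval explicitly in yarn-site.xml would look like the following (1000ms is the shipped default in yarn-default.xml, so on our cluster this entry is simply absent):

```xml
<!-- yarn-site.xml: how often each NodeManager heartbeats to the RM -->
<property>
  <name>yarn.resourcemanager.nodemanagers.heartbeat-interval-ms</name>
  <value>1000</value>
</property>
```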

> CapacityScheduler infinite loop when queue is near fully utilized
> -----------------------------------------------------------------
>                 Key: YARN-8513
>                 URL: https://issues.apache.org/jira/browse/YARN-8513
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacity scheduler, yarn
>    Affects Versions: 3.1.0, 2.9.1
>         Environment: Ubuntu 14.04.5 and 16.04.4
> YARN is configured with one label and 5 queues.
>            Reporter: Chen Yufei
>            Priority: Major
>         Attachments: jstack-1.log, jstack-2.log, jstack-3.log, jstack-4.log, jstack-5.log,
top-during-lock.log, top-when-normal.log, yarn3-jstack1.log, yarn3-jstack2.log, yarn3-jstack3.log,
yarn3-jstack4.log, yarn3-jstack5.log, yarn3-resourcemanager.log, yarn3-top
> Sometimes the ResourceManager stops responding to any request when a queue is near fully utilized.
Sending SIGTERM won't stop the RM; only SIGKILL can. After a restart, the RM recovers running
jobs and starts accepting new ones.
> Seems like CapacityScheduler is in an infinite loop printing out the following log messages
(more than 25,000 lines in a second):
> {{2018-07-10 17:16:29,227 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue:
assignedContainer queue=root usedCapacity=0.99816763 absoluteUsedCapacity=0.99816763 used=<memory:16170624,
vCores:1577> cluster=<memory:29441544, vCores:5792>}}
> {{2018-07-10 17:16:29,227 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
Failed to accept allocation proposal}}
> {{2018-07-10 17:16:29,227 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.AbstractContainerAllocator:
assignedContainer application attempt=appattempt_1530619767030_1652_000001 container=null
clusterResource=<memory:29441544, vCores:5792> type=NODE_LOCAL requestedPartition=}}
> I have encountered this problem several times since upgrading to YARN 2.9.1, while the same configuration
works fine under version 2.7.3.
> YARN-4477 is an infinite-loop bug in FairScheduler; I'm not sure if this is a similar problem.

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: yarn-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: yarn-issues-help@hadoop.apache.org
