hadoop-yarn-issues mailing list archives

From "Wangda Tan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-4105) Capacity Scheduler headroom for DRF is wrong
Date Wed, 02 Sep 2015 22:48:46 GMT

    [ https://issues.apache.org/jira/browse/YARN-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14728183#comment-14728183 ]

Wangda Tan commented on YARN-4105:

Patch LGTM too, thanks [~lichangleo]. Only one nit: could you update the test comment to:
bq. // app 1 ask for 10GB memory and 1 vcore,
bq. // allocates 10GB memory and 1 vcore to app 1.

Same to app2.

> Capacity Scheduler headroom for DRF is wrong
> --------------------------------------------
>                 Key: YARN-4105
>                 URL: https://issues.apache.org/jira/browse/YARN-4105
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacityscheduler
>    Affects Versions: 2.6.0
>            Reporter: Chang Li
>            Assignee: Chang Li
>         Attachments: YARN-4105.2.patch, YARN-4105.3.patch, YARN-4105.patch
> Related to the problem discussed in YARN-1857, but the min method is flawed when we are
> using DRC. We have run into a real scenario in production where queueCapacity: <memory:1056256,
> vCores:3750>, qconsumed: <memory:1054720, vCores:361>, consumed: <memory:125952,
> vCores:170>, limit: <memory:214016, vCores:755>. The headroom calculation returns 88064
> when there is only 1536 MB of memory left in the queue, because DRC effectively compares
> by vcores. This then caused a deadlock: the RMContainerAllocator thought there was still
> space for a mapper and would not preempt a reducer in the full queue to schedule one.
> Propose a fix using componentwiseMin.
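
To illustrate the issue with the numbers from the report above: the following is a minimal sketch, not the actual CapacityScheduler code. It models resources as simple {memory, vcores} pairs and a hypothetical `minByDominantShare` that mimics how a DominantResourceCalculator-style min picks the whole resource with the smaller dominant share, versus a component-wise min that takes the minimum of each dimension separately.

```java
// Hedged sketch of the headroom bug, assuming simplified {memory, vcores} pairs.
// These helper names (minByDominantShare, componentwiseMin) are illustrative,
// not the actual Hadoop Resources API.
public class HeadroomSketch {

    // DRC-style min: compare whole resources by their dominant share
    // (the larger of mem/clusterMem and vcores/clusterVcores) and return
    // the resource whose dominant share is smaller.
    static long[] minByDominantShare(long[] a, long[] b,
                                     long clusterMem, long clusterVcores) {
        double shareA = Math.max((double) a[0] / clusterMem,
                                 (double) a[1] / clusterVcores);
        double shareB = Math.max((double) b[0] / clusterMem,
                                 (double) b[1] / clusterVcores);
        return shareA <= shareB ? a : b;
    }

    // Component-wise min: take the minimum in each dimension independently.
    static long[] componentwiseMin(long[] a, long[] b) {
        return new long[] { Math.min(a[0], b[0]), Math.min(a[1], b[1]) };
    }

    public static void main(String[] args) {
        // Numbers from the report:
        // queueCapacity - qconsumed => what is actually left in the queue
        long[] queueRemaining = { 1056256 - 1054720, 3750 - 361 }; // {1536, 3389}
        // limit - consumed => the user-limit room
        long[] userLimitRoom  = { 214016 - 125952, 755 - 170 };    // {88064, 585}
        long clusterMem = 1056256, clusterVcores = 3750;

        // queueRemaining's dominant share is driven by vcores (3389/3750),
        // so DRC-style min picks userLimitRoom wholesale and reports
        // 88064 MB of memory headroom in a queue with only 1536 MB left.
        long[] byDominant  = minByDominantShare(queueRemaining, userLimitRoom,
                                                clusterMem, clusterVcores);
        long[] byComponent = componentwiseMin(queueRemaining, userLimitRoom);

        System.out.println(byDominant[0]);   // 88064: overstated memory headroom
        System.out.println(byComponent[0]);  // 1536: true memory left in the queue
    }
}
```

With component-wise min the memory dimension is capped at 1536, so the RM would not believe there is room for another mapper, which is the behavior the proposed patch restores.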

This message was sent by Atlassian JIRA
