hadoop-yarn-issues mailing list archives

From "Jian He (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-4390) Do surgical preemption based on reserved container in CapacityScheduler
Date Tue, 26 Apr 2016 02:12:13 GMT

    [ https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15257433#comment-15257433 ]

Jian He commented on YARN-4390:

That sounds OK to me. Some other comments:

- Why does this need to be volatile? The totalKillableResources reference is not updated anywhere:

     private volatile Resource totalKillableResources = Resource.newInstance(0, 0);
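For context on this point: volatile only guarantees visibility of writes to the *reference* itself. If the reference is assigned once and never replaced, the keyword adds nothing, and mutations of the object's fields remain racy. A minimal sketch of the distinction, using a hypothetical Counter class standing in for Resource:

```java
// Hypothetical stand-in for Resource: mutable fields behind a fixed reference.
class Counter {
    int memory;
    int vcores;
}

class KillableTracker {
    // volatile covers visibility of writes to THIS reference only.
    // Since the reference is assigned once and never reassigned,
    // volatile buys nothing here: field updates are still unsynchronized.
    private volatile Counter totalKillable = new Counter();

    void addKillable(int mem, int cores) {
        // Not made thread-safe by volatile; safe publication would need
        // synchronization, or swapping in an immutable snapshot object.
        totalKillable.memory += mem;
        totalKillable.vcores += cores;
    }

    Counter get() {
        return totalKillable;
    }
}
```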

- I think the reason the getters for these two fields were synchronized before is to make
sure they stay consistent with each other (i.e. they are updated together). The patch removed
the synchronized keyword for these two fields, which will give callers an inconsistent view:
  private volatile Resource unallocatedResource = Resource.newInstance(0, 0);
  private volatile Resource totalResource;
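The concern in concrete form: with two plain volatile fields, a reader calling the getters back-to-back can observe a mixed state that no single update ever produced. A minimal sketch, using a hypothetical NodeResources class with long megabyte counts standing in for Resource objects:

```java
// Sketch of the consistency concern raised above.
class NodeResources {
    private long unallocatedMB = 4096;
    private long totalMB = 8192;

    // Both fields are updated together under the lock...
    synchronized void addResource(long mb) {
        totalMB += mb;
        unallocatedMB += mb;
    }

    // ...so a synchronized getter returning both at once always sees a
    // consistent pair. With separate volatile reads, a caller could see
    // the new total paired with the old unallocated value.
    synchronized long[] snapshot() {
        return new long[] { unallocatedMB, totalMB };
    }
}
```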
- Why is this exception changed to be ignored? (combine this try-catch clause with the one
        try {
          //invoke the preemption policy at a regular pace
          //the policy will generate preemption or kill events
          //managed by the dispatcher
        } catch (YarnRuntimeException e) {
          LOG.error("YarnRuntimeException raised while executing preemption"
              + " checker, skip this run..., exception=", e);
- Code does not match comment? The comment says this applies when not considersReservedResourceWhenCalculateIdeal:
     * So only compute scalingFactor when we're not doing reserved container
     * preemption. {@link ReservedContainerCandidateSelector} will limit total
     * preempted resources less than permitted.
    float scalingFactor = 1.0F;
    if (considersReservedResourceWhenCalculateIdeal && Resources.greaterThan(rc,
        tot_guarant, totPreemptionNeeded, totalPreemptionAllowed)) {
      scalingFactor = Resources.divide(rc, tot_guarant, totalPreemptionAllowed,
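To spell out the mismatch: the Javadoc says the scaling factor should only be computed when reserved-container preemption is *not* in play, yet the guard fires when considersReservedResourceWhenCalculateIdeal is true. A simplified sketch of what the comment's reading would look like (plain doubles instead of Resource arithmetic; whether the code or the comment is the bug is exactly the open question):

```java
// Simplified model of the scaling-factor logic under discussion.
// This sketch implements the Javadoc's reading: skip scaling when
// reserved-container preemption is active, because
// ReservedContainerCandidateSelector already caps total preemption.
class ScalingSketch {
    static double scalingFactor(boolean reservedContainerPreemption,
                                double totalPreemptionNeeded,
                                double totalPreemptionAllowed) {
        if (!reservedContainerPreemption
                && totalPreemptionNeeded > totalPreemptionAllowed) {
            // Scale every target down proportionally so the sum of
            // preempted resources stays within the permitted budget.
            return totalPreemptionAllowed / totalPreemptionNeeded;
        }
        return 1.0;
    }
}
```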

> Do surgical preemption based on reserved container in CapacityScheduler
> -----------------------------------------------------------------------
>                 Key: YARN-4390
>                 URL: https://issues.apache.org/jira/browse/YARN-4390
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: capacity scheduler
>    Affects Versions: 3.0.0, 2.8.0, 2.7.3
>            Reporter: Eric Payne
>            Assignee: Wangda Tan
>         Attachments: YARN-4390-design.1.pdf, YARN-4390-test-results.pdf, YARN-4390.1.patch,
> YARN-4390.2.patch, YARN-4390.3.branch-2.patch, YARN-4390.3.patch, YARN-4390.4.patch, YARN-4390.5.patch,
> There are multiple reasons why preemption could unnecessarily preempt containers. One
> is that an app could be requesting a large container (say 8 GB), and the preemption monitor
> could conceivably preempt multiple containers (say eight 1-GB containers) in order to fill
> the large container request. These smaller containers would then be rejected by the
> requesting AM and potentially given right back to the preempted app.

This message was sent by Atlassian JIRA
