hadoop-yarn-issues mailing list archives

From "Wangda Tan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-2056) Disable preemption at Queue level
Date Thu, 13 Nov 2014 00:34:34 GMT

    [ https://issues.apache.org/jira/browse/YARN-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208990#comment-14208990 ]

Wangda Tan commented on YARN-2056:

Minor comments:
1) Typo of comment in cloneQueue:
bq. +        // I have no untouchable over-capicity (that is, all of my over-
capici.. -> capacity

+        // If my children's preemptable over-capacity is >= my over-capacity,
+        // I have no untouchable over-capicity (that is, all of my over-
+        // capacity resources are preemptable). Otherwise, to calculate my
+        // untouchable over-capacity, subtract my children's preemptable over-
+        // capacity from my over-capacity.
The logic should be correct, but I think it could be stated more simply as: untouchableExtra = max(extra
- childrenPreemptable, 0), which is exactly what the code does. :)
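The suggested simplification can be sketched as below. This is a minimal illustration only: the real cloneQueues logic operates on Resource objects, while here extra and childrenPreemptable are reduced to plain doubles, and the class/method names are hypothetical.

```java
public class UntouchableExtraSketch {

    // Sketch of the suggested comment/logic: if the children's preemptable
    // over-capacity covers my over-capacity, nothing of mine is untouchable;
    // otherwise the uncovered remainder is untouchable.
    static double untouchableExtra(double extra, double childrenPreemptable) {
        return Math.max(extra - childrenPreemptable, 0);
    }

    public static void main(String[] args) {
        // Children's preemptable over-capacity covers all of mine -> 0.
        System.out.println(untouchableExtra(4.0, 6.0)); // 0.0
        // Children cover only part of it -> the remainder is untouchable.
        System.out.println(untouchableExtra(4.0, 1.0)); // 3.0
    }
}
```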

bq. +    public double getIdealPctOfGuaranteed(TempQueue q)
The method doesn't need to be public anymore

Is it possible that there's only one queue in getMostUnderservedQueues? If so, you should check
whether q2 is null.
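To illustrate the single-queue concern, here is a hedged sketch. TempQueue and the "second most underserved queue" idea come from the patch under review; the TempQueue fields, the gapToNext helper, and the guard itself are assumptions for illustration, not the patch's actual code.

```java
import java.util.List;

public class UnderservedGuardSketch {

    // Hypothetical stand-in for the patch's TempQueue.
    static class TempQueue {
        final double idealPct;
        TempQueue(double idealPct) { this.idealPct = idealPct; }
    }

    // Sketch of the concern: when the ordered list holds a single queue,
    // there is no "second most underserved" queue, so q2 must be guarded
    // before it is dereferenced.
    static double gapToNext(List<TempQueue> orderedQueues) {
        TempQueue q1 = orderedQueues.get(0);
        TempQueue q2 = orderedQueues.size() > 1 ? orderedQueues.get(1) : null;
        if (q2 == null) {
            // Only one queue: nothing to compare against.
            return 0.0;
        }
        return q2.idealPct - q1.idealPct;
    }

    public static void main(String[] args) {
        // Single queue: the guard returns 0 instead of throwing.
        System.out.println(gapToNext(List.of(new TempQueue(0.5)))); // 0.0
        // Two queues: the gap between the first and second is returned.
        System.out.println(gapToNext(
            List.of(new TempQueue(0.25), new TempQueue(0.75)))); // 0.5
    }
}
```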

Review of the tests included in this round:
1) testDisablePreemptionOverCapPlusPending
Should it disable queueB instead of queueA? Currently, the test will preempt from appB no matter
whether preemption is disabled for queueA or not.

2) changes for testHierarchicalLarge:
-    verify(mDisp, times(9)).handle(argThat(new IsPreemptionRequestFor(appA)));
-    verify(mDisp, times(4)).handle(argThat(new IsPreemptionRequestFor(appE)));
+    verify(mDisp, times(6)).handle(argThat(new IsPreemptionRequestFor(appA)));
+    verify(mDisp, times(6)).handle(argThat(new IsPreemptionRequestFor(appE)));
I'm a little concerned about this change. Even accounting for rounding error, about 9-10 resources
should be taken from appA, so 9 -> 6 looks like it could be caused by a bug. Could you double-check
that it works as expected (without affecting the normal preemption logic)?

3) As above
-    // Since AM containers of appC and appD are saved, 2 containers from appB
+    // Since AM containers of appC and appD are saved, 1 containers from appB
     // has to be preempted
This is also a dangerous change, since it breaks the original expectation about skipping AM containers.

I suggest investigating why some of the original numbers need to be changed. If it is just a rounding
problem, that should be fine, but we should avoid behavior changes.

> Disable preemption at Queue level
> ---------------------------------
>                 Key: YARN-2056
>                 URL: https://issues.apache.org/jira/browse/YARN-2056
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: resourcemanager
>    Affects Versions: 2.4.0
>            Reporter: Mayank Bansal
>            Assignee: Eric Payne
>         Attachments: YARN-2056.201408202039.txt, YARN-2056.201408260128.txt, YARN-2056.201408310117.txt,
YARN-2056.201409022208.txt, YARN-2056.201409181916.txt, YARN-2056.201409210049.txt, YARN-2056.201409232329.txt,
YARN-2056.201409242210.txt, YARN-2056.201410132225.txt, YARN-2056.201410141330.txt, YARN-2056.201410232244.txt,
YARN-2056.201410311746.txt, YARN-2056.201411041635.txt, YARN-2056.201411072153.txt, YARN-2056.201411122305.txt
> We need to be able to disable preemption at individual queue level

This message was sent by Atlassian JIRA
