hadoop-yarn-issues mailing list archives

From "Wangda Tan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-3769) Preemption occurring unnecessarily because preemption doesn't consider user limit
Date Wed, 02 Sep 2015 23:56:46 GMT

    [ https://issues.apache.org/jira/browse/YARN-3769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14728251#comment-14728251 ]

Wangda Tan commented on YARN-3769:


Thanks for working on the patch, the approach generally looks good. A few comments on the implementation:

{{getTotalResourcePending}} is misleading, I suggest renaming it to something like {{getTotalResourcePendingConsideredUserLimit}},
and adding a comment to indicate that it will only be used by the preemption policy.

And for implementation:
I think there's no need to store an appsPerUser map. It would cost O(apps-in-the-queue) memory,
and you would need O(apps-in-the-queue) insert operations as well. Instead, you can do the following:
Map<UserName, Headroom> userNameToHeadroom;

Resource userLimit = computeUserLimit(partition);
Resource pendingAndPreemptable = 0;

for (app in apps) {
	if (!userNameToHeadroom.contains(app.getUser())) {
		userNameToHeadroom.put(app.getUser(), userLimit - app.getUser().getUsed(partition));
	}
	// Count only the pending demand that fits under the user's remaining headroom,
	// and deduct that same amount so the headroom never goes negative.
	Resource allowed = min(userNameToHeadroom.get(app.getUser()), app.getPending(partition));
	pendingAndPreemptable += allowed;
	userNameToHeadroom.get(app.getUser()) -= allowed;
}

return pendingAndPreemptable;
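To make the idea above concrete, here is a minimal standalone sketch of the per-user headroom computation, using plain ints in place of YARN's Resource type. The class, record, and method names are hypothetical illustrations, not the actual patch:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: sum pending demand per app, capped by each user's
// remaining headroom (userLimit - that user's current usage), so no single
// user contributes more pending than the user limit allows.
public class PendingConsideringUserLimit {

    /** One (user, pending) pair standing in for a scheduled application. */
    record App(String user, int pending) {}

    static int totalPendingConsideringUserLimit(
            List<App> apps, int userLimit, Map<String, Integer> usedByUser) {
        // Lazily-initialized headroom per user, as in the pseudocode above.
        Map<String, Integer> headroom = new HashMap<>();
        int pendingAndPreemptable = 0;
        for (App app : apps) {
            headroom.computeIfAbsent(app.user(),
                    u -> userLimit - usedByUser.getOrDefault(u, 0));
            // Count only what fits under this user's remaining headroom,
            // then deduct that same amount so headroom never goes negative.
            int allowed = Math.max(0,
                    Math.min(headroom.get(app.user()), app.pending()));
            pendingAndPreemptable += allowed;
            headroom.merge(app.user(), -allowed, Integer::sum);
        }
        return pendingAndPreemptable;
    }
}
```

For example, with a user limit of 10 and user "a" already using 6, two apps of "a" pending 3 each contribute only 3 + 1 = 4 (capped by the headroom of 4), while an app of an idle user "b" pending 5 contributes the full 5.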

And could you add a test to verify it works?

> Preemption occurring unnecessarily because preemption doesn't consider user limit
> ---------------------------------------------------------------------------------
>                 Key: YARN-3769
>                 URL: https://issues.apache.org/jira/browse/YARN-3769
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacityscheduler
>    Affects Versions: 2.6.0, 2.7.0, 2.8.0
>            Reporter: Eric Payne
>            Assignee: Eric Payne
>         Attachments: YARN-3769.001.branch-2.7.patch, YARN-3769.001.branch-2.8.patch
> We are seeing the preemption monitor preempting containers from queue A and then seeing
> the capacity scheduler giving them immediately back to queue A. This happens quite often
> and causes a lot of churn.

This message was sent by Atlassian JIRA