Date: Tue, 25 Jul 2017 15:15:01 +0000 (UTC)
From: "Eric Payne (JIRA)"
To: yarn-issues@hadoop.apache.org
Subject: [jira] [Commented] (YARN-5892) Support user-specific minimum user limit percentage in Capacity Scheduler

    [ https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16100167#comment-16100167 ]

Eric Payne commented on YARN-5892:
----------------------------------

Thanks [~sunilg] for the review.

{quote}
In ActiveUsersManager, could we avoid activeUsersChanged if possible. May be we could keep an active set in ActiveUsersManager itself and we could clear this set when activate/deactivateApplication is invoked.
{quote}

In {{ActiveUsersManager}}, the keyset of {{usersApplications}} already holds the list of active users, but it is not enough to know only whether there are users in that keyset. {{LeafQueue}} also needs to know when a user is added to or removed from the set, so that {{LeafQueue#computeUserLimit}} recomputes the sum of the active users' weights only when a user goes active or inactive, not on every pass through the code.
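To illustrate the idea, here is a rough sketch of the pattern. This is simplified, hypothetical code, not the actual patch: the {{Sketch*}} class names, the fields, and the limit formula are made up for the example; only {{activeUsersChanged}}, {{usersApplications}}, and {{activateApplication}}/{{deactivateApplication}} correspond to the names discussed above.

{code}
// Illustrative sketch only -- not the actual ActiveUsersManager/LeafQueue code.
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

class SketchActiveUsersManager {
  // keyset of usersApplications == the currently active users
  private final Map<String, Integer> usersApplications = new ConcurrentHashMap<>();
  // flipped whenever a user is added to or removed from the active set
  final AtomicBoolean activeUsersChanged = new AtomicBoolean(false);

  synchronized void activateApplication(String user) {
    Integer apps = usersApplications.get(user);
    usersApplications.put(user, apps == null ? 1 : apps + 1);
    if (apps == null) {
      activeUsersChanged.set(true);      // user just became active
    }
  }

  synchronized void deactivateApplication(String user) {
    Integer apps = usersApplications.get(user);
    if (apps == null) {
      return;
    }
    if (apps <= 1) {
      usersApplications.remove(user);
      activeUsersChanged.set(true);      // user just became inactive
    } else {
      usersApplications.put(user, apps - 1);
    }
  }

  Set<String> getActiveUsers() {
    return usersApplications.keySet();
  }
}

class SketchLeafQueue {
  private final SketchActiveUsersManager activeUsersManager = new SketchActiveUsersManager();
  private final Map<String, Float> userWeights = new ConcurrentHashMap<>();
  private float activeUsersWeightSum = 0f;

  float computeUserLimit(float queueResource, float minimumUserLimitPercent) {
    // Re-sum the active users' weights only when the active set actually changed.
    if (activeUsersManager.activeUsersChanged.compareAndSet(true, false)) {
      float sum = 0f;
      for (String user : activeUsersManager.getActiveUsers()) {
        sum += userWeights.getOrDefault(user, 1.0f);
      }
      activeUsersWeightSum = sum;
    }
    // Illustrative formula only; the real computation in LeafQueue is more involved.
    float divisor = Math.max(activeUsersWeightSum, 1.0f);
    return (queueResource * minimumUserLimitPercent / 100f) / divisor;
  }
}
{code}

The point of the sketch is that the expensive re-summation of user weights is guarded by {{activeUsersChanged}}, which only flips when a user actually enters or leaves the active set.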
{{LeafQueue}} could possibly keep its own copy of the value returned by {{ActiveUsersManager#getNumActiveUsers}} and compare it with the current value whenever it needs to decide whether to recompute the sum of active user weights. However, that seems more complicated and error-prone than using {{activeUsersChanged}}.

> Support user-specific minimum user limit percentage in Capacity Scheduler
> -------------------------------------------------------------------------
>
>                 Key: YARN-5892
>                 URL: https://issues.apache.org/jira/browse/YARN-5892
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: capacityscheduler
>            Reporter: Eric Payne
>            Assignee: Eric Payne
>             Fix For: 3.0.0-alpha3
>
>         Attachments: Active users highlighted.jpg, YARN-5892.001.patch, YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, YARN-5892.008.patch, YARN-5892.009.patch, YARN-5892.010.patch, YARN-5892.012.patch, YARN-5892.013.patch, YARN-5892.014.patch, YARN-5892.015.patch, YARN-5892.branch-2.015.patch, YARN-5892.branch-2.016.patch, YARN-5892.branch-2.8.016.patch, YARN-5892.branch-2.8.017.patch, YARN-5892.branch-2.8.018.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} property is per queue. A cluster admin should be able to set the minimum user limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled (YARN-4945 / YARN-2113), some users can be deemed as more important than other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like this:
> {code}
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>   <value>25</value>
> </property>
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>   <value>75</value>
> </property>
> {code}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: yarn-issues-help@hadoop.apache.org