Date: Thu, 23 Mar 2017 22:51:42 +0000 (UTC)
From: "ASF GitHub Bot (JIRA)"
To: yarn-issues@hadoop.apache.org
Subject: [jira] [Commented] (YARN-5829) FS preemption should reserve a node before considering containers on it for preemption

    [ https://issues.apache.org/jira/browse/YARN-5829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15939378#comment-15939378 ]

ASF GitHub Bot commented on YARN-5829:
--------------------------------------

Github user kambatla commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/201#discussion_r107801908

    --- Diff: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java ---
    @@ -987,9 +978,19 @@ void attemptScheduling(FSSchedulerNode node) {
         }

         // Assign new containers...
    -    // 1. Check for reserved applications
    -    // 2. Schedule if there are no reservations
    -
    +    // 1. Ensure containers are assigned to the apps that preempted
    +    // 2. Check for reserved applications
    +    // 3. Schedule if there are no reservations
    +
    +    // Apps may be waiting for preempted containers. We have to satisfy
    +    // these first to avoid the case where we preempt a container for A
    +    // from B, but C gets the preempted container even though C does not
    +    // itself qualify for preemption.
    +    for (FSAppAttempt app : node.getPreemptionList()) {
    --- End diff --

    We don't seem to handle the case where an app is to be allocated multiple
    containers based on the preempted resources. Would it help to store the
    apps in a TreeMap in FSSchedulerNode and have a method that returns the
    next preempted app to allocate to, or null if none exist?

    Also, if this were to work, we might have to revert to the behavior in an
    earlier patch: add app/resources to the map only after the container is
    actually preempted.
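A minimal sketch of the bookkeeping that TreeMap suggestion seems to call for;
the class, the method names, and the memory-only accounting below are
illustrative assumptions, not the actual FSSchedulerNode code:

    // Sketch only: PreemptionBookkeeping, AppAttempt, and tracking memory
    // alone are assumptions for illustration; the real FSSchedulerNode
    // tracks full Resource objects and lives inside YARN.
    import java.util.Comparator;
    import java.util.TreeMap;

    public class PreemptionBookkeeping {

      /** Stand-in for YARN's FSAppAttempt, reduced to an id. */
      static class AppAttempt {
        final String attemptId;
        AppAttempt(String attemptId) { this.attemptId = attemptId; }
      }

      // Memory preempted on this node, keyed by the app it was preempted
      // for. A TreeMap keeps iteration deterministic (here: by attempt id).
      private final TreeMap<AppAttempt, Integer> preemptedMbForApp =
          new TreeMap<>(Comparator.comparing((AppAttempt a) -> a.attemptId));

      /** Record a preemption; per the review comment, call this only once
       *  the container is actually gone, not when it is merely marked. */
      synchronized void containerPreemptedFor(AppAttempt app, int mb) {
        preemptedMbForApp.merge(app, mb, Integer::sum);
      }

      /** Next app with an outstanding preempted balance, or null if none.
       *  An app stays in the map until its balance is spent, so repeated
       *  calls can hand the same app multiple containers. */
      synchronized AppAttempt nextAppToAllocateTo() {
        return preemptedMbForApp.isEmpty() ? null
            : preemptedMbForApp.firstKey();
      }

      /** Charge an allocation against the app's preempted balance. */
      synchronized void allocationMade(AppAttempt app, int mb) {
        Integer left = preemptedMbForApp.get(app);
        if (left == null) {
          return; // nothing was preempted for this app on this node
        }
        if (left <= mb) {
          preemptedMbForApp.remove(app); // balance exhausted
        } else {
          preemptedMbForApp.put(app, left - mb);
        }
      }
    }

Ordering by attempt id is arbitrary here; a real scheduler might instead order
the waiting apps by how starved each one is.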
> FS preemption should reserve a node before considering containers on it for preemption
> --------------------------------------------------------------------------------------
>
>                 Key: YARN-5829
>                 URL: https://issues.apache.org/jira/browse/YARN-5829
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: fairscheduler
>            Reporter: Karthik Kambatla
>            Assignee: Miklos Szegedi
>
> FS preemption evaluates nodes for preemption and subsequently preempts the
> identified containers. If the node is not reserved for a specific
> application, any other application could be allocated resources on it
> before the starved application gets them. Reserving the node for the
> starved application before preempting containers would help avoid this
> race.


--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: yarn-issues-help@hadoop.apache.org