Date: Fri, 22 Dec 2017 17:15:00 +0000 (UTC)
From: "Sunil G (JIRA)"
To: yarn-issues@hadoop.apache.org
Subject: [jira] [Comment Edited] (YARN-7612) Add Placement Processor Framework

[ https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301676#comment-16301676 ]

Sunil G edited comment on YARN-7612 at 12/22/17 5:14 PM:
---------------------------------------------------------

Thanks [~asuresh]

bq. That should be OK right? If it gets cleared up, then it will be picked up in the next allocate call.

So the code is something like this:
{code}
public void allocate(ApplicationAttemptId appAttemptId,
    AllocateRequest request, AllocateResponse response) throws YarnException {
  List<SchedulingRequest> schedulingRequests =
      request.getSchedulingRequests();
  dispatchRequestsForPlacement(appAttemptId, schedulingRequests);
  reDispatchRetryableRequests(appAttemptId);
  schedulePlacedRequests(appAttemptId);
{code}

Here reDispatchRetryableRequests clears the contents of requestsToRetry. Then *schedulePlacedRequests* in turn calls addToRetryList, which picks up the same requests again within a single call flow? My question was about removing the second check in the addToRetryList method.
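To make the concern concrete, here is a minimal, self-contained sketch of that call flow. The class and method names mirror those in the comment, but the bodies are hypothetical (the real patch operates on SchedulingRequest objects and the scheduler's internals, not strings): it shows how requestsToRetry is cleared in reDispatchRetryableRequests and then repopulated by addToRetryList within the same allocate() pass, which is why the duplicate check there is exercised in one flow.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the dispatch/retry flow discussed above.
// Requests are modeled as plain strings; tryPlace is a stand-in
// for the real placement-constraint check.
public class RetryFlowSketch {
    private final List<String> requestsToRetry = new ArrayList<>();
    private final List<String> placedRequests = new ArrayList<>();

    public void allocate(List<String> newRequests) {
        dispatchRequestsForPlacement(newRequests);
        reDispatchRetryableRequests();   // drains requestsToRetry
        schedulePlacedRequests();        // may call addToRetryList again
    }

    private void dispatchRequestsForPlacement(List<String> requests) {
        placedRequests.addAll(requests);
    }

    private void reDispatchRetryableRequests() {
        placedRequests.addAll(requestsToRetry);
        requestsToRetry.clear();         // cleared here ...
    }

    private void schedulePlacedRequests() {
        for (String req : new ArrayList<>(placedRequests)) {
            if (!tryPlace(req)) {
                addToRetryList(req);     // ... and repopulated here,
            }                            // within the same allocate() pass
            placedRequests.remove(req);
        }
    }

    private void addToRetryList(String req) {
        if (!requestsToRetry.contains(req)) {  // the "second check" in question
            requestsToRetry.add(req);
        }
    }

    private boolean tryPlace(String req) {
        // Stand-in: pretend anti-affinity requests cannot be placed yet.
        return !req.startsWith("anti-affinity");
    }

    public List<String> pendingRetries() {
        return requestsToRetry;
    }
}
```

Since reDispatchRetryableRequests has already emptied requestsToRetry before schedulePlacedRequests runs, a freshly re-added request cannot already be in the list in this flow, which is the basis for asking whether the containment check is needed.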
bq. I was hoping to handle all these efficiency improvements in a separate JIRA (as am sure more will pop up once we start doing scalability tests)

Yes. Makes sense to me.

> Add Placement Processor Framework
> ---------------------------------
>
>                 Key: YARN-7612
>                 URL: https://issues.apache.org/jira/browse/YARN-7612
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Arun Suresh
>            Assignee: Arun Suresh
>         Attachments: YARN-7612-YARN-6592.001.patch, YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, YARN-7612-YARN-6592.008.patch, YARN-7612-YARN-6592.009.patch, YARN-7612-YARN-6592.010.patch, YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
> This introduces a Placement Processor and a planning algorithm framework to handle placement constraints and scheduling requests from an app and place them on nodes.
> The actual planning algorithm(s) will be handled in YARN-7613.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)