Date: Sun, 2 Sep 2018 14:07:00 +0000 (UTC)
From: "niu (JIRA)"
To: yarn-issues@hadoop.apache.org
Subject: [jira] [Comment Edited] (YARN-8513) CapacityScheduler infinite loop when queue is near fully utilized

    [ https://issues.apache.org/jira/browse/YARN-8513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16601223#comment-16601223 ]

niu edited comment on YARN-8513 at 9/2/18 2:06 PM:
---------------------------------------------------

Thanks. I will try it tomorrow.

was (Author: hustnn):
Thank. I will try it tomorrow.

> CapacityScheduler infinite loop when queue is near fully utilized
> -----------------------------------------------------------------
>
>                 Key: YARN-8513
>                 URL: https://issues.apache.org/jira/browse/YARN-8513
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacity scheduler, yarn
>    Affects Versions: 3.1.0, 2.9.1
>        Environment: Ubuntu 14.04.5 and 16.04.4
> YARN is configured with one label and 5 queues.
>             Reporter: Chen Yufei
>             Priority: Major
>          Attachments: jstack-1.log, jstack-2.log, jstack-3.log, jstack-4.log, jstack-5.log, top-during-lock.log, top-when-normal.log, yarn3-jstack1.log, yarn3-jstack2.log, yarn3-jstack3.log, yarn3-jstack4.log, yarn3-jstack5.log, yarn3-resourcemanager.log, yarn3-top
>
>
> ResourceManager sometimes does not respond to any request when a queue is near fully utilized. Sending SIGTERM won't stop the RM; only SIGKILL can. After an RM restart, it can recover running jobs and start accepting new ones.
>
> CapacityScheduler seems to be in an infinite loop, printing the following log messages (more than 25,000 lines in a second):
>
> {{2018-07-10 17:16:29,227 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.99816763 absoluteUsedCapacity=0.99816763 used= cluster=}}
> {{2018-07-10 17:16:29,227 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Failed to accept allocation proposal}}
> {{2018-07-10 17:16:29,227 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.AbstractContainerAllocator: assignedContainer application attempt=appattempt_1530619767030_1652_000001 container=null queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@14420943 clusterResource= type=NODE_LOCAL requestedPartition=}}
>
> I have encountered this problem several times after upgrading to YARN 2.9.1, while the same configuration works fine under version 2.7.3.
>
> YARN-4477 is an infinite loop bug in FairScheduler; I am not sure whether this is a similar problem.
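
One quick sanity check on the log rate quoted above is to bucket the scheduler messages in the attached yarn3-resourcemanager.log by their one-second timestamp prefix. The following is a minimal sketch, not part of the original report; the local file path and the choice of message substrings (taken from the log excerpt above) are assumptions.

{code:python}
#!/usr/bin/env python3
"""Count how many of the quoted scheduler messages appear per second.

Sketch only: assumes yarn3-resourcemanager.log (one of the attachments)
has been downloaded to the current directory.
"""
import re
from collections import Counter

# RM log lines start with a timestamp like "2018-07-10 17:16:29,227";
# capture only the one-second prefix so lines can be bucketed per second.
TS = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})")

# Substrings taken from the excerpt above (assumed to identify the loop).
LOOP_MARKERS = ("Failed to accept allocation proposal", "assignedContainer")

per_second = Counter()
with open("yarn3-resourcemanager.log", errors="replace") as log:
    for line in log:
        m = TS.match(line)
        if m and any(marker in line for marker in LOOP_MARKERS):
            per_second[m.group(1)] += 1

# Busiest seconds first; counts in the tens of thousands would match the
# "more than 25,000 lines in a second" observation.
for second, count in per_second.most_common(10):
    print(second, count)
{code}

Correlating the busiest seconds with the attached jstack dumps is one way to check whether the scheduler thread was spinning in the allocate/reject path at those moments.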