Date: Tue, 3 Dec 2013 02:32:35 +0000 (UTC)
From: "qingwu.fu (JIRA)"
To: yarn-issues@hadoop.apache.org
Subject: [jira] [Updated] (YARN-1458) In Fair Scheduler, size based weight can cause update thread to hold lock indefinitely

     [ https://issues.apache.org/jira/browse/YARN-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

qingwu.fu updated YARN-1458:
----------------------------
    Attachment: YARN-1458.patch

In the Fair Scheduler, if size-based weight is turned on and every application in a queue has a resource demand of 0, the update thread falls into an endless loop in ComputeFairShares.computeShares (ComputeFairShares.java:102). This patch handles that situation: it lets the program jump out of the loop when all applications' demands in a queue are 0, which effectively treats that queue's demand as 0.

> In Fair Scheduler, size based weight can cause update thread to hold lock indefinitely
> ---------------------------------------------------------------------------------------
>
>                 Key: YARN-1458
>                 URL: https://issues.apache.org/jira/browse/YARN-1458
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: scheduler
>    Affects Versions: 2.2.0
>         Environment: CentOS 2.6.18-238.19.1.el5 x86_64
>                      Hadoop 2.2.0
>            Reporter: qingwu.fu
>              Labels: patch
>             Fix For: 2.2.1
>
>         Attachments: YARN-1458.patch
>
>   Original Estimate: 408h
>  Remaining Estimate: 408h
>
> The ResourceManager$SchedulerEventDispatcher$EventProcessor thread becomes blocked when clients submit lots of jobs. The problem is not easy to reproduce; we ran the test cluster for days to reproduce it.
> The output of the jstack command on the ResourceManager pid:
> {code}
> "ResourceManager Event Processor" prio=10 tid=0x00002aaab0c5f000 nid=0x5dd3 waiting for monitor entry [0x0000000043aa9000]
>    java.lang.Thread.State: BLOCKED (on object monitor)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.removeApplication(FairScheduler.java:671)
>         - waiting to lock <0x000000070026b6e0> (a org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1023)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
>         at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:440)
>         at java.lang.Thread.run(Thread.java:744)
> ……
> "FairSchedulerUpdateThread" daemon prio=10 tid=0x00002aaab0a2c800 nid=0x5dc8 runnable [0x00000000433a2000]
>    java.lang.Thread.State: RUNNABLE
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.getAppWeight(FairScheduler.java:545)
>         - locked <0x000000070026b6e0> (a org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable.getWeights(AppSchedulable.java:129)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.computeShare(ComputeFairShares.java:143)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.resourceUsedWithWeightToResourceRatio(ComputeFairShares.java:131)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.computeShares(ComputeFairShares.java:102)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.FairSharePolicy.computeShares(FairSharePolicy.java:119)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.recomputeShares(FSLeafQueue.java:100)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.recomputeShares(FSParentQueue.java:62)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.update(FairScheduler.java:282)
>         - locked <0x000000070026b6e0> (a org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$UpdateThread.run(FairScheduler.java:255)
>         at java.lang.Thread.run(Thread.java:744)
> {code}

--
This message was sent by Atlassian JIRA
(v6.1#6144)
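
For readers following the stack trace, below is a hedged Java sketch of why the loop at ComputeFairShares.java:102 can spin forever under size-based weight and what the early exit described in the comment above looks like. It is illustrative only: SimpleSchedulable, getDemand(), getWeight(), setFairShare(), and resourceUsedWithRatio() are simplified stand-ins invented for this sketch, not the real YARN classes or the code in the attached YARN-1458.patch.

{code}
import java.util.Collection;

/**
 * Minimal, simplified sketch of the YARN-1458 failure mode and a guard in the
 * spirit of the patch description. Types and helpers are stand-ins, not the
 * actual Hadoop classes or the actual patch.
 */
public class ComputeFairSharesSketch {

  /** Stand-in for a schedulable app in one queue. */
  interface SimpleSchedulable {
    double getWeight();     // with size-based weight on, 0 when demand is 0
    int getDemand();        // resources the app is currently asking for
    void setFairShare(int share);
  }

  static void computeShares(Collection<? extends SimpleSchedulable> apps,
                            int totalResource) {
    // Guard described in the comment: if every app in the queue demands 0
    // resources, the ratio-doubling loop below can never reach totalResource,
    // so treat the queue's demand as 0 and exit early.
    boolean allDemandsZero = true;
    for (SimpleSchedulable app : apps) {
      if (app.getDemand() > 0) {
        allDemandsZero = false;
        break;
      }
    }
    if (allDemandsZero) {
      for (SimpleSchedulable app : apps) {
        app.setFairShare(0);
      }
      return;
    }

    // Simplified version of the original search: grow the weight-to-resource
    // ratio until the shares it hands out add up to the queue's resources.
    // Without the guard above, all weights are 0, every share stays 0, and
    // this loop spins forever while FairScheduler.update() holds the
    // scheduler lock, which is exactly what the jstack output shows.
    double ratio = 1.0;
    while (resourceUsedWithRatio(ratio, apps) < totalResource) {
      ratio *= 2.0;
    }
    // ... the real code then binary-searches between ratio/2 and ratio and
    // assigns each app's fair share; omitted here for brevity ...
  }

  private static int resourceUsedWithRatio(double ratio,
      Collection<? extends SimpleSchedulable> apps) {
    long total = 0;
    for (SimpleSchedulable app : apps) {
      // Each app's share grows with weight * ratio; an app whose weight is 0
      // contributes 0 no matter how large the ratio gets.
      total += (long) Math.min(app.getWeight() * ratio, Integer.MAX_VALUE);
    }
    return (int) Math.min(total, Integer.MAX_VALUE);
  }
}
{code}

Checking the apps' demands up front, rather than capping the number of loop iterations, follows the wording of the comment ("set that queue's require resource to 0"); the attached YARN-1458.patch may implement the exit differently.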