Date: Tue, 27 Mar 2018 18:11:00 +0000 (UTC)
From: "Shane Kumpf (JIRA)"
To: yarn-issues@hadoop.apache.org
Subject: [jira] [Commented] (YARN-8037) CGroupsResourceCalculator logs excessive warnings on container relaunch

    [ https://issues.apache.org/jira/browse/YARN-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416025#comment-16416025 ]

Shane Kumpf commented on YARN-8037:
-----------------------------------

Thanks for the response [~haibochen]. I can appreciate that position and will close this issue.

I'll note that handling the exception called out in YARN-8035 doesn't fix this; the "The process vanished in the interim" exceptions continue to repeat every second. However, I agree that there is an underlying cause that filtering the exception alone doesn't solve. For relaunch, I think it makes sense to stop the monitoring thread, and I'll look into doing so as part of YARN-7973.
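For illustration, the relaunch-aware approach could take roughly the shape sketched below. This is a minimal sketch of the idea, not the YARN-7973 patch: the class and every method name in it ({{RelaunchAwareMonitor}}, {{onRelaunchStarted}}, {{onRelaunchCompleted}}) are hypothetical and are not part of the {{ContainersMonitorImpl}} API. The only point is that the monitoring pass consults a relaunch flag before touching the old pid's cgroup files.

{code:java}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustrative sketch only: shows the shape of "stop monitoring during
 * relaunch". All names here are hypothetical, not the real
 * ContainersMonitorImpl API.
 */
public class RelaunchAwareMonitor {

  /** Containers currently being tracked, keyed by container id. */
  private final Map<String, Long> trackedPids = new ConcurrentHashMap<>();

  /** Containers whose old process has exited and which await relaunch. */
  private final Set<String> relaunching = ConcurrentHashMap.newKeySet();

  /** Called by the relaunch path before the old process is torn down. */
  public void onRelaunchStarted(String containerId) {
    relaunching.add(containerId);
  }

  /** Called once the new process is up, with its new pid. */
  public void onRelaunchCompleted(String containerId, long newPid) {
    trackedPids.put(containerId, newPid);
    relaunching.remove(containerId);
  }

  /** One pass of the monitoring loop. */
  public void monitorOnce() {
    for (Map.Entry<String, Long> entry : trackedPids.entrySet()) {
      if (relaunching.contains(entry.getKey())) {
        // The old pid is gone and its cgroup files are deleted; reading
        // them would only produce "process vanished" warnings, so the
        // container is skipped until the relaunch completes.
        continue;
      }
      readUsageFromCgroups(entry.getKey(), entry.getValue());
    }
  }

  private void readUsageFromCgroups(String containerId, long pid) {
    // Placeholder for the cpuacct.stat / memory.usage_in_bytes reads
    // that CGroupsResourceCalculator performs each interval.
  }
}
{code}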
> CGroupsResourceCalculator logs excessive warnings on container relaunch
> -----------------------------------------------------------------------
>
>                 Key: YARN-8037
>                 URL: https://issues.apache.org/jira/browse/YARN-8037
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Shane Kumpf
>            Priority: Major
>
> When a container is relaunched, the old process no longer exists. When using the {{CGroupsResourceCalculator}}, this results in the warning and exception below being logged every second until the relaunch occurs, which is excessive and fills up the logs.
> {code:java}
> 2018-03-16 14:30:33,438 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsResourceCalculator: Failed to parse 12844
> org.apache.hadoop.yarn.exceptions.YarnException: The process vanished in the interim 12844
> at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsResourceCalculator.processFile(CGroupsResourceCalculator.java:336)
> at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsResourceCalculator.readTotalProcessJiffies(CGroupsResourceCalculator.java:252)
> at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsResourceCalculator.updateProcessTree(CGroupsResourceCalculator.java:181)
> at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CombinedResourceCalculator.updateProcessTree(CombinedResourceCalculator.java:52)
> at org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:457)
> Caused by: java.io.FileNotFoundException: /sys/fs/cgroup/cpu,cpuacct/hadoop-yarn/container_e01_1521209613260_0002_01_000002/cpuacct.stat (No such file or directory)
> at java.io.FileInputStream.open0(Native Method)
> at java.io.FileInputStream.open(FileInputStream.java:195)
> at java.io.FileInputStream.<init>(FileInputStream.java:138)
> at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsResourceCalculator.processFile(CGroupsResourceCalculator.java:320)
> ... 4 more
> 2018-03-16 14:30:33,438 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsResourceCalculator: Failed to parse cgroups /sys/fs/cgroup/memory/hadoop-yarn/container_e01_1521209613260_0002_01_000002/memory.memsw.usage_in_bytes
> org.apache.hadoop.yarn.exceptions.YarnException: The process vanished in the interim 12844
> at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsResourceCalculator.processFile(CGroupsResourceCalculator.java:336)
> at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsResourceCalculator.getMemorySize(CGroupsResourceCalculator.java:238)
> at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsResourceCalculator.updateProcessTree(CGroupsResourceCalculator.java:187)
> at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CombinedResourceCalculator.updateProcessTree(CombinedResourceCalculator.java:52)
> at org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:457)
> Caused by: java.io.FileNotFoundException: /sys/fs/cgroup/memory/hadoop-yarn/container_e01_1521209613260_0002_01_000002/memory.usage_in_bytes (No such file or directory)
> at java.io.FileInputStream.open0(Native Method)
> at java.io.FileInputStream.open(FileInputStream.java:195)
> at java.io.FileInputStream.<init>(FileInputStream.java:138)
> at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsResourceCalculator.processFile(CGroupsResourceCalculator.java:320)
> ... 4 more{code}
> At a minimum, we should consider moving the exception to debug level to reduce the noise. Alternatively, it may make sense to stop the existing {{MonitoringThread}} during relaunch.
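For reference, a minimal sketch of the first option, the log-level change. The method names mirror the frames in the stack traces above ({{updateProcessTree}}, {{readTotalProcessJiffies}}), but the class is hypothetical and this is an illustration of the proposed change, not actual Hadoop source:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Illustrative sketch of the "move the warning to debug" option.
 * The structure mirrors the updateProcessTree frames in the stack
 * traces above; it is not the real CGroupsResourceCalculator.
 */
public class QuietCalculatorSketch {

  private static final Logger LOG =
      LoggerFactory.getLogger(QuietCalculatorSketch.class);

  private final String pid;

  public QuietCalculatorSketch(String pid) {
    this.pid = pid;
  }

  public void updateProcessTree() {
    try {
      readTotalProcessJiffies();
    } catch (Exception e) { // YarnException in the real calculator
      // Previously logged at WARN on every monitoring interval; at
      // DEBUG a vanished process no longer floods the NodeManager log.
      LOG.debug("Failed to parse {}", pid, e);
    }
  }

  private void readTotalProcessJiffies() throws Exception {
    // Placeholder for reading cpuacct.stat from the container's cgroup;
    // throws once the relaunched container's cgroup files are removed.
    throw new Exception("The process vanished in the interim " + pid);
  }
}
{code}

Either way, the underlying race (the monitoring interval outliving the container's cgroup files during relaunch) remains, which is why stopping the {{MonitoringThread}} for the container is the more complete fix.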