From yarn-issues-return-135432-archive-asf-public=cust-asf.ponee.io@hadoop.apache.org Sun Jan 21 04:36:16 2018
Date: Sun, 21 Jan 2018 03:36:02 +0000 (UTC)
From: "genericqa (JIRA)"
To: yarn-issues@hadoop.apache.org
Subject: [jira] [Commented] (YARN-7176) Similar to YARN-2387: Resource Manager crashes with NPE due to lack of synchronization

[ https://issues.apache.org/jira/browse/YARN-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333385#comment-16333385 ]

genericqa commented on YARN-7176:
---------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 15m 14s | trunk passed |
| +1 | compile | 0m 32s | trunk passed |
| +1 | checkstyle | 0m 19s | trunk passed |
| +1 | mvnsite | 0m 36s | trunk passed |
| +1 | shadedclient | 9m 17s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 6s | trunk passed |
| +1 | javadoc | 0m 35s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 32s | the patch passed |
| +1 | compile | 0m 28s | the patch passed |
| +1 | javac | 0m 28s | the patch passed |
| +1 | checkstyle | 0m 17s | the patch passed |
| +1 | mvnsite | 0m 32s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 36s | patch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 1m 15s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | javadoc | 0m 33s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 2m 59s | hadoop-yarn-common in the patch passed. |
| +1 | asflicense | 0m 22s | The patch does not generate ASF License warnings. |
| | | 44m 23s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| | Inconsistent synchronization of org.apache.hadoop.yarn.api.records.impl.pb.ContainerLaunchContextPBImpl.builder; locked 60% of time. Unsynchronized access at ContainerLaunchContextPBImpl.java:[line 328] |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7176 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12906984/YARN-7176.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux f5c3ec986721 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 836643d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/19361/artifact/out/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.html |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/19361/testReport/ |
| Max. process+thread count | 409 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/19361/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.


> Similar to YARN-2387: Resource Manager crashes with NPE due to lack of synchronization
> ---------------------------------------------------------------------------------------
>
>                 Key: YARN-7176
>                 URL: https://issues.apache.org/jira/browse/YARN-7176
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: RM
>    Affects Versions: 2.6.0
>            Reporter: lujie
>            Assignee: lujie
>            Priority: Blocker
>         Attachments: YARN-7176.patch, logs.rar
>
>
> Submit a job and, while the job is starting containers, send a kill command. After the RM receives the kill command, it persists the application state to the state store. The container-start path and the state-store path (e.g. FileSystemRMStateStore) both call the same method, ContainerLaunchContextPBImpl.getProto(), which lacks synchronization, so the RM log shows the errors below (see the sketch after the log).
> {code:java}
> 2017-09-08 02:34:37,967 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Error launching appattempt_1504809243340_0001_000001.
> Got exception: java.lang.ArrayIndexOutOfBoundsException: 3
> 	at java.util.ArrayList.add(ArrayList.java:441)
> 	at com.google.protobuf.AbstractMessageLite$Builder.addAll(AbstractMessageLite.java:330)
> 	at org.apache.hadoop.yarn.proto.YarnProtos$ContainerLaunchContextProto$Builder.addAllApplicationACLs(YarnProtos.java:39956)
> 	at org.apache.hadoop.yarn.api.records.impl.pb.ContainerLaunchContextPBImpl.addApplicationACLs(ContainerLaunchContextPBImpl.java:446)
> 	at org.apache.hadoop.yarn.api.records.impl.pb.ContainerLaunchContextPBImpl.mergeLocalToBuilder(ContainerLaunchContextPBImpl.java:121)
> 	at org.apache.hadoop.yarn.api.records.impl.pb.ContainerLaunchContextPBImpl.mergeLocalToProto(ContainerLaunchContextPBImpl.java:128)
> 	at org.apache.hadoop.yarn.api.records.impl.pb.ContainerLaunchContextPBImpl.getProto(ContainerLaunchContextPBImpl.java:70)
> 	at org.apache.hadoop.yarn.api.protocolrecords.impl.pb.StartContainerRequestPBImpl.convertToProtoFormat(StartContainerRequestPBImpl.java:156)
> 	at org.apache.hadoop.yarn.api.protocolrecords.impl.pb.StartContainerRequestPBImpl.mergeLocalToBuilder(StartContainerRequestPBImpl.java:85)
> 	at org.apache.hadoop.yarn.api.protocolrecords.impl.pb.StartContainerRequestPBImpl.mergeLocalToProto(StartContainerRequestPBImpl.java:95)
> 	at org.apache.hadoop.yarn.api.protocolrecords.impl.pb.StartContainerRequestPBImpl.getProto(StartContainerRequestPBImpl.java:57)
> 	at org.apache.hadoop.yarn.api.protocolrecords.impl.pb.StartContainersRequestPBImpl.convertToProtoFormat(StartContainersRequestPBImpl.java:137)
> 	at org.apache.hadoop.yarn.api.protocolrecords.impl.pb.StartContainersRequestPBImpl.addLocalRequestsToProto(StartContainersRequestPBImpl.java:97)
> 	at org.apache.hadoop.yarn.api.protocolrecords.impl.pb.StartContainersRequestPBImpl.mergeLocalToBuilder(StartContainersRequestPBImpl.java:79)
> 	at org.apache.hadoop.yarn.api.protocolrecords.impl.pb.StartContainersRequestPBImpl.mergeLocalToProto(StartContainersRequestPBImpl.java:72)
> 	at org.apache.hadoop.yarn.api.protocolrecords.impl.pb.StartContainersRequestPBImpl.getProto(StartContainersRequestPBImpl.java:48)
> 	at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:93)
> 	at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
> 	at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> 2017-09-08 02:34:37,968 ERROR org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Error updating app: application_1504809243340_0001
> java.lang.NullPointerException
> 	at com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
> 	at com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
> 	at org.apache.hadoop.yarn.proto.YarnProtos$ContainerLaunchContextProto.getSerializedSize(YarnProtos.java:38512)
> 	at com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
> 	at com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
> 	at org.apache.hadoop.yarn.proto.YarnProtos$ApplicationSubmissionContextProto.getSerializedSize(YarnProtos.java:28481)
> 	at com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
> 	at com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
> 	at org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$ApplicationStateDataProto.getSerializedSize(YarnServerResourceManagerRecoveryProtos.java:816)
> 	at com.google.protobuf.AbstractMessageLite.toByteArray(AbstractMessageLite.java:62)
> 	at org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.updateApplicationStateInternal(FileSystemRMStateStore.java:426)
> 	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore.java:163)
> 	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore.java:148)
> 	at org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
> 	at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> 	at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> 	at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> 	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:810)
> 	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:864)
> 	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:859)
> 	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
> 	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
> 	at java.lang.Thread.run(Thread.java:745)
> 2017-09-08 02:34:37,978 FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received a org.apache.hadoop.yarn.server.resourcemanager.RMFatalEvent of type STATE_STORE_OP_FAILED.
> Cause:
> java.lang.NullPointerException
> 	at com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
> 	at com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
> 	at org.apache.hadoop.yarn.proto.YarnProtos$ContainerLaunchContextProto.getSerializedSize(YarnProtos.java:38512)
> 	at com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
> 	at com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
> 	at org.apache.hadoop.yarn.proto.YarnProtos$ApplicationSubmissionContextProto.getSerializedSize(YarnProtos.java:28481)
> 	at com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
> 	at com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
> 	at org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$ApplicationStateDataProto.getSerializedSize(YarnServerResourceManagerRecoveryProtos.java:816)
> 	at com.google.protobuf.AbstractMessageLite.toByteArray(AbstractMessageLite.java:62)
> 	at org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.updateApplicationStateInternal(FileSystemRMStateStore.java:426)
> 	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore.java:163)
> 	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore.java:148)
> 	at org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
> 	at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> 	at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> 	at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> 	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:810)
> 	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:864)
> 	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:859)
> 	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
> 	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
> 	at java.lang.Thread.run(Thread.java:745)
> 2017-09-08 02:34:37,987 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1504809243340_0001_01_000001 Container Transitioned from ACQUIRED to KILLED
> 2017-09-08 02:34:37,987 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1504809243340_0001_01_000001 in state: KILLED event:KILL
> 2017-09-08 02:34:37,987 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hires	OPERATION=AM Released Container	TARGET=SchedulerApp	RESULT=SUCCESS	APPID=application_1504809243340_0001	CONTAINERID=container_1504809243340_0001_01_000001
> 2017-09-08 02:34:37,988 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1504809243340_0001_01_000001 of capacity on host hadoop11:45454, which currently has 0 containers, used and available, release resources=true
> 2017-09-08 02:34:37,988 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2017-09-08 02:34:37,988 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used= numContainers=0 user=hires user-resources=
> 2017-09-08 02:34:37,989 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1504809243340_0001_01_000001, NodeId: hadoop11:45454, NodeHttpAddress: hadoop11:8042, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 10.3.1.11:45454 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=
> 2017-09-08 02:34:37,989 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used= cluster=
> 2017-09-08 02:34:37,990 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
> 2017-09-08 02:34:37,990 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1504809243340_0001_000001 released container container_1504809243340_0001_01_000001 on node: host: hadoop11:45454 #containers=0 available=8096 used=0 with event: KILL
> 2017-09-08 02:34:37,990 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1504809243340_0001 requests cleared
> 2017-09-08 02:34:37,990 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1504809243340_0001 user: hires queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
> 2017-09-08 02:34:38,001 ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
> 2017-09-08 02:34:38,005 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@hadoop11:8088
> 2017-09-08 02:34:38,005 ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
> 2017-09-08 02:34:38,006 ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
> 2017-09-08 02:34:38,108 INFO org.apache.hadoop.ipc.Server: Stopping server on 8032
> 2017-09-08 02:34:38,113 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8032
> 2017-09-08 02:34:38,113 INFO org.apache.hadoop.ipc.Server: Stopping server on 8033
> 2017-09-08 02:34:38,114 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
> 2017-09-08 02:34:38,114 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8033
> 2017-09-08 02:34:38,114 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
> {code}
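To make the race in the description and the findbugs warning easier to follow, here is a minimal, self-contained sketch of the merge-on-getProto pattern the stack traces point at. All names below (GetProtoRaceSketch, getProtoSynchronized, the plain List standing in for the generated protobuf Builder, the sample ACL strings) are simplified illustrations, not the actual Hadoop classes; in the real code, ContainerLaunchContextPBImpl.getProto()/mergeLocalToBuilder() is invoked concurrently by AMLauncher.launch() and FileSystemRMStateStore.updateApplicationStateInternal(). The synchronized variant shows a fix in the same spirit as YARN-2387: serialize the merge-and-build path.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * Toy model of the PBImpl merge-on-getProto pattern (hypothetical,
 * simplified names; a plain List stands in for the protobuf Builder).
 */
public class GetProtoRaceSketch {

    // Local, mutable record state, analogous to the applicationACLs map.
    private final List<String> acls =
            new ArrayList<>(Arrays.asList("appuser MODIFY_APP", "appuser VIEW_APP"));
    private final List<String> builder = new ArrayList<>(); // shared, not thread-safe
    private boolean viaProto = false;

    /**
     * Unsynchronized, like the reported getProto(): two threads can both see
     * viaProto == false and both run the merge, mutating the same ArrayList.
     * That is consistent with the ArrayIndexOutOfBoundsException thrown while
     * AMLauncher builds the StartContainers request, and with the half-built
     * message that later NPEs in getSerializedSize when the state store
     * serializes it.
     */
    public List<String> getProto() {
        if (!viaProto) {
            builder.addAll(acls); // mergeLocalToBuilder()
            viaProto = true;      // proto = builder.build(); viaProto = true;
        }
        return builder;
    }

    /** YARN-2387-style fix: only one thread may merge and build at a time. */
    public synchronized List<String> getProtoSynchronized() {
        if (!viaProto) {
            builder.addAll(acls);
            viaProto = true;
        }
        return builder;
    }

    public static void main(String[] args) throws InterruptedException {
        GetProtoRaceSketch ctx = new GetProtoRaceSketch();
        // The two racing callers from the log: the AM launcher building the
        // StartContainers request, and the RM state store persisting the app.
        Thread amLauncher = new Thread(ctx::getProto, "AMLauncher");
        Thread stateStore = new Thread(ctx::getProto, "RMStateStore");
        amLauncher.start();
        stateStore.start();
        amLauncher.join();
        stateStore.join();
        // With the unsynchronized variant this can print 4 entries instead of
        // 2 (or throw from ArrayList); with getProtoSynchronized() it cannot.
        System.out.println("builder = " + ctx.getProto());
    }
}
{code}

Run main() a few times to observe the failure mode; the corruption is intermittent, which matches the report that the crash needs a kill command to arrive exactly while containers are being started.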