From: Rohith Sharma K S
To: user@hadoop.apache.org
Subject: RE: spark job hangs/fails with localizer failing
Date: Wed, 13 May 2015 04:13:57 +0000

Hi,

This looks like a permissions issue in secure mode: the path
/tmp/hadoop-yarn/nm-local-dir/usercache/testuser/appcache/application_1431466075462_0001
does not have the desired permissions. Could you confirm that the directory permissions match those listed at
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html#Configuration

Thanks & Regards
Rohith Sharma K S

From: Nance, Keith [mailto:knance@smartronix.com]
Sent: 13 May 2015 05:07
To: user@hadoop.apache.org
Subject: spark job hangs/fails with localizer failing

At wits' end... unable to get a simple Spark Pi application to run on a secured YARN cluster. Help is MUCH appreciated.
Below are the log entries for the Spark job, NodeManager, and ResourceManager.
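For reference, the Secure Mode page expects the NodeManager local and log dirs to be owned by yarn:hadoop with mode drwxr-xr-x, while the per-user caches underneath usercache/ are created by container-executor and owned by the submitting user. A quick check could look like the sketch below (a diagnostic sketch only, using GNU stat; the path /tmp/hadoop-yarn/nm-local-dir is taken from the error above, and the expected owner/mode values should be verified against your own yarn-site.xml and the Secure Mode table):

```shell
#!/bin/sh
# Sketch: compare NodeManager local-dir permissions with the Secure Mode table.
# NM_LOCAL_DIR mirrors the path in the error above; substitute the value of
# yarn.nodemanager.local-dirs from your yarn-site.xml.
NM_LOCAL_DIR="${NM_LOCAL_DIR:-/tmp/hadoop-yarn/nm-local-dir}"

check_perms() {
  # $1 = path, $2 = expected owner, $3 = expected octal mode
  actual=$(stat -c '%U %a' "$1" 2>/dev/null) || { echo "MISSING $1"; return 0; }
  if [ "$actual" = "$2 $3" ]; then
    echo "OK      $1 ($actual)"
  else
    echo "WRONG   $1 (have: $actual, want: $2 $3)"
  fi
}

# Secure Mode guide: local-dirs and log-dirs should be yarn:hadoop, drwxr-xr-x (755).
check_perms "$NM_LOCAL_DIR" yarn 755
# Per-user caches must belong to the submitting user, not yarn; stale entries
# left over from a run before LinuxContainerExecutor was enabled are a common culprit.
stat -c '%U:%G %a %n' "$NM_LOCAL_DIR"/usercache/* 2>/dev/null || true
```

If usercache/testuser turns out to be owned by the wrong user, stopping the NodeManager and removing the stale usercache entry so it is recreated on the next localization is a common remedy.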
###: SPARK USER/JOB :###

[testuser@ip-10-10-127-10 spark]$ ./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 1 --executor-cores 1 lib/spark-examples*.jar 10
/home/testuser/spark/conf/spark-env.sh: line 54: -Dspark.history.kerberos.principal=spark/ip-10-10-127-10.ec2.internal@MALARD.LOCAL: No such file or directory
Using properties file: /home/testuser/spark/conf/spark-defaults.conf
Adding default property: spark.serializer=org.apache.spark.serializer.KryoSerializer
Adding default property: spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
Adding default property: spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
Adding default property: spark.logConf=true
Adding default property: spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
Adding default property: spark.master=yarn-client
Adding default property: spark.authenticate=true
Adding default property: spark.eventlog.enabled=true
Parsed arguments:
  master                  yarn-client
  deployMode              null
  executorMemory          null
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /home/testuser/spark/conf/spark-defaults.conf
  driverMemory            null
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
  driverExtraJavaOptions  null
  supervise               false
  queue                   null
  numExecutors            1
  files                   null
  pyFiles                 null
  archives                null
  mainClass               org.apache.spark.examples.SparkPi
  primaryResource         file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
  name                    org.apache.spark.examples.SparkPi
  childArgs               [10]
  jars                    null
  packages                null
  repositories            null
  verbose                 true
Spark properties used, including those specified through --conf and those from the properties file /home/testuser/spark/conf/spark-defaults.conf:
  spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
  spark.logConf -> true
  spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
  spark.authenticate -> true
  spark.serializer -> org.apache.spark.serializer.KryoSerializer
  spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
  spark.master -> yarn-client
  spark.eventlog.enabled -> true
Main class:
org.apache.spark.examples.SparkPi
Arguments:
10
System properties:
spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
spark.executor.instances -> 1
spark.logConf -> true
spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.authenticate -> true
SPARK_SUBMIT -> true
spark.serializer -> org.apache.spark.serializer.KryoSerializer
spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.app.name -> org.apache.spark.examples.SparkPi
spark.jars -> file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.master -> yarn-client
spark.executor.cores -> 1
spark.eventlog.enabled -> true
Classpath elements:
file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar

15/05/12 21:29:03 INFO spark.SparkContext: Running Spark version 1.3.1
15/05/12 21:29:03 INFO spark.SparkContext: Spark configuration:
spark.app.name=Spark Pi
spark.authenticate=true
spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.eventlog.enabled=true
spark.executor.cores=1
spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.executor.instances=1
spark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.logConf=true
spark.master=yarn-client
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
15/05/12 21:29:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/12 21:29:04 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/12 21:29:04 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/12 21:29:05 INFO spark.SecurityManager: adding secret to credentials in yarn mode
15/05/12 21:29:05 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/12 21:29:07 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/12 21:29:07 INFO Remoting: Starting remoting
15/05/12 21:29:07 INFO util.Utils: Successfully started service 'sparkDriver' on port 53747.
15/05/12 21:29:07 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:53747]
15/05/12 21:29:07 INFO spark.SparkEnv: Registering MapOutputTracker
15/05/12 21:29:07 INFO spark.SparkEnv: Registering BlockManagerMaster
15/05/12 21:29:07 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-0877f4af-cae7-4d4c-b9f3-434712e8a654/blockmgr-f202f0b4-3842-40f5-934f-43fd764e641a
15/05/12 21:29:07 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
15/05/12 21:29:08 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-20236d03-2f2c-4619-bd6b-859fbd4b18b9/httpd-3ac7254f-3216-4181-8323-bb7493bfea2a
15/05/12 21:29:08 INFO spark.HttpServer: Starting HTTP Server
15/05/12 21:29:08 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/12 21:29:08 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:37326
15/05/12 21:29:08 INFO util.Utils: Successfully started service 'HTTP file server' on port 37326.
15/05/12 21:29:08 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/05/12 21:29:08 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/12 21:29:08 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/05/12 21:29:08 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/05/12 21:29:08 INFO ui.SparkUI: Started SparkUI at http://ip-10-10-127-10.ec2.internal:4040
15/05/12 21:29:09 INFO spark.SparkContext: Added JAR file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar at http://10.10.127.10:37326/jars/spark-examples-1.3.1-hadoop2.6.0.jar with timestamp 1431466149854
15/05/12 21:29:10 INFO client.RMProxy: Connecting to ResourceManager at ip-10-10-127-10.ec2.internal/10.10.127.10:8032
15/05/12 21:29:11 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
15/05/12 21:29:11 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
15/05/12 21:29:11 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/05/12 21:29:11 INFO yarn.Client: Setting up container launch context for our AM
15/05/12 21:29:11 INFO yarn.Client: Preparing resources for our AM container
15/05/12 21:29:13 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 16 for testuser on 10.10.10.10:8020
15/05/12 21:29:13 INFO yarn.Client: Uploading resource file:/home/testuser/spark/lib/spark-assembly-1.3.1-hadoop2.6.0.jar -> hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1431466075462_0001/spark-assembly-1.3.1-hadoop2.6.0.jar
15/05/12 21:29:18 INFO yarn.Client: Setting up the launch environment for our AM container
15/05/12 21:29:18 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/12 21:29:18 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/12 21:29:18 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/12 21:29:18 INFO yarn.Client: Submitting application 1 to ResourceManager
15/05/12 21:29:21 INFO impl.YarnClientImpl: Submitted application application_1431466075462_0001
15/05/12 21:29:22 INFO yarn.Client: Application report for application_1431466075462_0001 (state: ACCEPTED)
15/05/12 21:29:22 INFO yarn.Client:
	 client token: Token { kind: YARN_CLIENT_TOKEN, service: }
	 diagnostics: N/A
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1431466159129
	 final status: UNDEFINED
	 tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1431466075462_0001/
	 user: testuser
15/05/12 21:29:23 INFO yarn.Client: Application report for application_1431466075462_0001 (state: ACCEPTED)
15/05/12 21:29:24 INFO yarn.Client: Application report for application_1431466075462_0001 (state: ACCEPTED)
15/05/12 21:29:25 INFO yarn.Client: Application report for application_1431466075462_0001 (state: ACCEPTED)
...(TRUNCATED)...
15/05/12 21:41:13 INFO yarn.Client: Application report for application_1431466075462_0001 (state: ACCEPTED)
15/05/12 21:41:14 INFO yarn.Client: Application report for application_1431466075462_0001 (state: ACCEPTED)
15/05/12 21:41:15 INFO yarn.Client: Application report for application_1431466075462_0001 (state: ACCEPTED)
15/05/12 21:41:16 INFO yarn.Client: Application report for application_1431466075462_0001 (state: FAILED)
15/05/12 21:41:16 INFO yarn.Client:
	 client token: N/A
	 diagnostics: Application application_1431466075462_0001 failed 2 times due to AM Container for appattempt_1431466075462_0001_000002 exited with exitCode: -1000
For more detailed output, check application tracking page: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1431466075462_0001/ Then, click on links to logs of each attempt.
Diagnostics: Application application_1431466075462_0001 initialization failed (exitCode=255) with output: main : command provided 0
main : user is testuser
main : requested yarn user is testuser
Path /tmp/hadoop-yarn/nm-local-dir/usercache/testuser/appcache/application_1431466075462_0001 does not have desired permission.
Did not create any app directories
Failing this attempt. Failing the application.
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1431466159129
	 final status: FAILED
	 tracking URL: https://ip-10-10-127-10.ec2.internal:8090/cluster/app/application_1431466075462_0001
	 user: testuser
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:113)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:59)
	at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:381)
	at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:28)
	at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
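An aside on the very first line of the client output: "/home/testuser/spark/conf/spark-env.sh: line 54: -Dspark.history.kerberos.principal=...: No such file or directory" is the shell trying to *execute* a bare -D flag, which usually means a multi-option assignment in spark-env.sh lost its quotes. The exact contents of line 54 aren't shown, so the variable name and keytab path below are assumptions for illustration:

```shell
#!/bin/sh
# Failure mode (hypothetical reconstruction of spark-env.sh line 54):
#   SPARK_HISTORY_OPTS=-Dspark.history.kerberos.keytab=/etc/spark.keytab -Dspark.history.kerberos.principal=spark/host@REALM
# Without quotes, the shell parses the second -D token as a command name,
# which produces exactly the "No such file or directory" error seen above.

# Fix: quote the whole value so both options stay in one assignment.
SPARK_HISTORY_OPTS="-Dspark.history.kerberos.keytab=/etc/spark.keytab -Dspark.history.kerberos.principal=spark/ip-10-10-127-10.ec2.internal@MALARD.LOCAL"
export SPARK_HISTORY_OPTS

echo "$SPARK_HISTORY_OPTS"
```

This doesn't explain the localization failure itself, but it is worth fixing so spark-env.sh is evaluated cleanly.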
####: NODEMANAGER :####

2015-05-12 21:29:22,237 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1431466075462_0001_000001 (auth:SIMPLE)
2015-05-12 21:29:22,408 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1431466075462_0001_000001: id: appattempt_1431466075462_0001_000001: no such user
2015-05-12 21:29:22,409 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1431466075462_0001_000001
2015-05-12 21:29:22,409 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1431466075462_0001_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-12 21:29:22,570 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1431466075462_0001_01_000001 by user testuser
2015-05-12 21:29:22,648 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1431466075462_0001
2015-05-12 21:29:22,663 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1431466075462_0001 transitioned from NEW to INITING
2015-05-12 21:29:22,679 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1431466075462_0001 transitioned from INITING to RUNNING
2015-05-12 21:29:22,680 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser IP=10.10.127.10 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1431466075462_0001 CONTAINERID=container_1431466075462_0001_01_000001
2015-05-12 21:29:22,689 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1431466075462_0001_01_000001 to application application_1431466075462_0001
2015-05-12 21:29:22,707 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431466075462_0001_01_000001 transitioned from NEW to LOCALIZING
2015-05-12 21:29:22,707 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1431466075462_0001
2015-05-12 21:29:22,773 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1431466075462_0001/spark-assembly-1.3.1-hadoop2.6.0.jar transitioned from INIT to DOWNLOADING
2015-05-12 21:29:22,773 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1431466075462_0001_01_000001
2015-05-12 21:29:23,306 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /tmp/hadoop-yarn/nm-local-dir/nmPrivate/container_1431466075462_0001_01_000001.tokens. Credentials list:
2015-05-12 21:41:15,528 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1431466075462_0001_000001 (auth:SIMPLE)
2015-05-12 21:41:15,535 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1431466075462_0001_000001: id: appattempt_1431466075462_0001_000001: no such user
2015-05-12 21:41:15,536 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1431466075462_0001_000001
2015-05-12 21:41:15,536 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1431466075462_0001_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-12 21:41:15,539 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1431466075462_0001_01_000001
2015-05-12 21:41:15,539 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser IP=10.10.127.10 OPERATION=Stop Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1431466075462_0001 CONTAINERID=container_1431466075462_0001_01_000001
2015-05-12 21:41:15,566 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431466075462_0001_01_000001 transitioned from LOCALIZING to KILLING
2015-05-12 21:41:15,568 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431466075462_0001_01_000001 transitioned from KILLING to DONE
2015-05-12 21:41:15,568 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1431466075462_0001_01_000001 from application application_1431466075462_0001
2015-05-12 21:41:15,569 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1431466075462_0001
2015-05-12 21:41:15,594 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1431466075462_0001_000002 (auth:SIMPLE)
2015-05-12 21:41:15,602 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1431466075462_0001_000002: id: appattempt_1431466075462_0001_000002: no such user
2015-05-12 21:41:15,602 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1431466075462_0001_000002
2015-05-12 21:41:15,603 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1431466075462_0001_000002 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-12 21:41:15,604 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1431466075462_0001_02_000001 by user testuser
2015-05-12 21:41:15,604 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1431466075462_0001_02_000001 to application application_1431466075462_0001
2015-05-12 21:41:15,605 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431466075462_0001_02_000001 transitioned from NEW to LOCALIZING
2015-05-12 21:41:15,605 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1431466075462_0001
2015-05-12 21:41:15,605 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1431466075462_0001_02_000001
2015-05-12 21:41:15,604 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser IP=10.10.127.10 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1431466075462_0001 CONTAINERID=container_1431466075462_0001_02_000001
2015-05-12 21:41:15,620 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /tmp/hadoop-yarn/nm-local-dir/nmPrivate/container_1431466075462_0001_02_000001.tokens. Credentials list:
2015-05-12 21:41:15,672 WARN org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Exit code from container container_1431466075462_0001_02_000001 startLocalizer is : 255
ExitCodeException exitCode=255:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.startLocalizer(LinuxContainerExecutor.java:232)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1088)
2015-05-12 21:41:15,673 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: main : command provided 0
2015-05-12 21:41:15,673 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: main : user is testuser
2015-05-12 21:41:15,673 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: main : requested yarn user is testuser
2015-05-12 21:41:15,673 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Path /tmp/hadoop-yarn/nm-local-dir/usercache/testuser/appcache/application_1431466075462_0001 does not have desired permission.
2015-05-12 21:41:15,673 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Did not create any app directories
2015-05-12 21:41:15,673 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Localizer failed
java.io.IOException: Application application_1431466075462_0001 initialization failed (exitCode=255) with output: main : command provided 0
main : user is testuser
main : requested yarn user is testuser
Path /tmp/hadoop-yarn/nm-local-dir/usercache/testuser/appcache/application_1431466075462_0001 does not have desired permission.
Did not create any app directories
	at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.startLocalizer(LinuxContainerExecutor.java:241)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1088)
Caused by: ExitCodeException exitCode=255:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.startLocalizer(LinuxContainerExecutor.java:232)
	... 1 more
2015-05-12 21:41:15,674 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431466075462_0001_02_000001 transitioned from LOCALIZING to LOCALIZATION_FAILED
2015-05-12 21:41:15,675 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser OPERATION=Container Finished - Failed TARGET=ContainerImpl RESULT=FAILURE DESCRIPTION=Container failed with state: LOCALIZATION_FAILED APPID=application_1431466075462_0001 CONTAINERID=container_1431466075462_0001_02_000001
2015-05-12 21:41:15,675 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431466075462_0001_02_000001 transitioned from LOCALIZATION_FAILED to DONE
2015-05-12 21:41:15,675 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1431466075462_0001_02_000001 from application application_1431466075462_0001
2015-05-12 21:41:15,675 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1431466075462_0001
2015-05-12 21:41:17,593 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1431466075462_0001_02_000001]
2015-05-12 21:41:17,594 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1431466075462_0001 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2015-05-12 21:41:17,594 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1431466075462_0001
2015-05-12 21:41:17,595 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1431466075462_0001 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2015-05-12 21:41:17,595 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1431466075462_0001, with delay of 10800 seconds
2015-05-12 21:41:17,654 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1431466075462_0001_01_000001
2015-05-12 21:41:17,654 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1431466075462_0001_02_000001
2015-05-12 21:44:48,514 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Localizer failed
java.io.IOException: java.lang.InterruptedException
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:541)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.startLocalizer(LinuxContainerExecutor.java:232)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1088)
2015-05-12 21:44:48,514 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Event EventType: RESOURCE_FAILED sent to absent container container_1431466075462_0001_01_000001

####: RESOURCEMANAGER :####

2015-05-12 21:29:11,624 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for testuser@MALARD.LOCAL (auth:KERBEROS)
2015-05-12 21:29:11,702 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for testuser@MALARD.LOCAL (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2015-05-12 21:29:11,807 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 1
2015-05-12 21:29:19,129 WARN org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific max attempts: 0 for application: 1 is invalid, because it is out of the range [1, 2]. Use the global max attempts instead.
2015-05-12 21:29:19,144 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 1 submitted by user testuser
2015-05-12 21:29:19,145 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser IP=10.10.127.10 OPERATION=Submit Application Request TARGET=ClientRMService RESULT=SUCCESS APPID=application_1431466075462_0001
2015-05-12 21:29:19,535 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: application_1431466075462_0001 found existing hdfs token Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 16 for testuser)
2015-05-12 21:29:21,558 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renewed delegation-token= [Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 16 for testuser);exp=1431552561549], for application_1431466075462_0001
2015-05-12 21:29:21,559 INFO org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer: Renew Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 16 for testuser);exp=1431552561549 in 86399991 ms, appId = application_1431466075462_0001
2015-05-12 21:29:21,559 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1431466075462_0001
2015-05-12 21:29:21,561 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1431466075462_0001 State change from NEW to NEW_SAVING
2015-05-12 21:29:21,575 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1431466075462_0001
2015-05-12 21:29:21,589 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1431466075462_0001 State change from NEW_SAVING to SUBMITTED
2015-05-12 21:29:21,591 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application added - appId: application_1431466075462_0001 user: testuser leaf-queue of parent: root #applications: 1
2015-05-12 21:29:21,591 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted application application_1431466075462_0001 from user: testuser, in queue: default
2015-05-12 21:29:21,593 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1431466075462_0001 State change from SUBMITTED to ACCEPTED
2015-05-12 21:29:21,647 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1431466075462_0001_000001
2015-05-12 21:29:21,648 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000001 State change from NEW to SUBMITTED
2015-05-12 21:29:21,671 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1431466075462_0001 from user: testuser activated in queue: default
2015-05-12 21:29:21,671 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1431466075462_0001 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@73e84841, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2015-05-12 21:29:21,671 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1431466075462_0001_000001 to scheduler from user testuser in queue default
2015-05-12 21:29:21,673 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000001 State change from SUBMITTED to SCHEDULED
2015-05-12 21:29:21,869 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431466075462_0001_01_000001 Container Transitioned from NEW to ALLOCATED
2015-05-12 21:29:21,869 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1431466075462_0001 CONTAINERID=container_1431466075462_0001_01_000001
2015-05-12 21:29:21,870 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1431466075462_0001_01_000001 of capacity on host ip-10-10-128-10.ec2.internal:9032, which has 1 containers, used and available after allocation
2015-05-12 21:29:21,870 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1431466075462_0001_000001 container=Container: [ContainerId: container_1431466075462_0001_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: , Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=
2015-05-12 21:29:21,870 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
2015-05-12 21:29:21,870 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used= cluster=
2015-05-12 21:29:21,891 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : ip-10-10-128-10.ec2.internal:9032 for container : container_1431466075462_0001_01_000001
2015-05-12 21:29:21,907 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431466075462_0001_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-05-12 21:29:21,907 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1431466075462_0001_000001
2015-05-12 21:29:21,910 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1431466075462_0001 AttemptId: appattempt_1431466075462_0001_000001 MasterContainer: Container: [ContainerId: container_1431466075462_0001_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ]
2015-05-12 21:29:21,927 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000001 State change from SCHEDULED to ALLOCATED_SAVING
2015-05-12 21:29:21,942 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000001 State change from ALLOCATED_SAVING to ALLOCATED
2015-05-12 21:29:21,945 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1431466075462_0001_000001
2015-05-12 21:29:21,970 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1431466075462_0001_01_000001, NodeId:
ip-10-10-128-10.ec2.internal:9032= , NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 10= .10.128.10:9032 }, ] for AM appattempt_1431466075462_0001_000001 2015-05-12 21:29:21,970 INFO org.apache.hadoop.yarn.server.resourcemanager.= amlauncher.AMLauncher: Command to launch container container_1431466075462_= 0001_01_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir= =3D{{PWD}}/tmp,'-Dspark.driver.host=3Dip-10-10-127-10.ec2.internal','-Dspar= k.driver.port=3D53747','-Dspark.driver.appUIAddress=3Dhttp://ip-10-10-127-1= 0.ec2.internal:4040','-Dspark.master=3Dyarn-client','-Dspark.fileserver.uri= =3Dhttp://10.10.127.10:37326','-Dspark.executor.extraJavaOptions=3D-XX:+Pri= ntGCDetails -Dkey=3Dvalue -Dnumbers=3D\"one two three\"','-Dspark.yarn.acce= ss.namenodes=3Dhdfs://10.10.10.10:8020','-Dspark.logConf=3Dtrue','-Dspark.s= erializer=3Dorg.apache.spark.serializer.KryoSerializer','-Dspark.executor.i= d=3D','-Dspark.jars=3Dfile:/home/testuser/spark/lib/spark-examples-= 1.3.1-hadoop2.6.0.jar','-Dspark.executor.instances=3D1','-Dspark.app.name= =3DSpark Pi','-Dspark.eventlog.dir=3Dhdfs://10.10.10.10:8020/user/testuser/= spark/eventlog','-Dspark.tachyonStore.folderName=3Dspark-9481ab9a-85db-4bfe= -9d2f-ceb45f31d37c','-Dspark.executor.cores=3D1','-Dspark.eventlog.enabled= =3Dtrue','-Dspark.authenticate=3Dtrue',-Dspark.yarn.app.container.log.dir= =3D,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'ip-10-10-= 127-10.ec2.internal:53747',--executor-memory,1024m,--executor-cores,1,--num= -executors ,1,1>,/stdout,2>,/stderr 2015-05-12 21:29:21,982 INFO org.apache.hadoop.yarn.server.resourcemanager.= security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: a= ppattempt_1431466075462_0001_000001 2015-05-12 21:29:21,985 INFO org.apache.hadoop.yarn.server.resourcemanager.= security.AMRMTokenSecretManager: Creating password for appattempt_143146607= 5462_0001_000001 2015-05-12 
21:29:22,710 INFO org.apache.hadoop.yarn.server.resourcemanager.= amlauncher.AMLauncher: Done launching container Container: [ContainerId: co= ntainer_1431466075462_0001_01_000001, NodeId: ip-10-10-128-10.ec2.internal:= 9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service= : 10.10.128.10:9032 }, ] for AM appattempt_1431466075462_0001_000001 2015-05-12 21:29:22,710 INFO org.apache.hadoop.yarn.server.resourcemanager.= rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000001 State = change from ALLOCATED to LAUNCHED 2015-05-12 21:29:22,915 INFO org.apache.hadoop.yarn.server.resourcemanager.= rmcontainer.RMContainerImpl: container_1431466075462_0001_01_000001 Contain= er Transitioned from ACQUIRED to RUNNING 2015-05-12 21:37:55,432 INFO org.apache.hadoop.yarn.server.resourcemanager.= scheduler.AbstractYarnScheduler: Release request cache is cleaned up 2015-05-12 21:41:15,504 INFO org.apache.hadoop.yarn.util.AbstractLiveliness= Monitor: Expired:appattempt_1431466075462_0001_000001 Timed out after 600 s= ecs 2015-05-12 21:41:15,505 INFO org.apache.hadoop.yarn.server.resourcemanager.= rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_143= 1466075462_0001_000001 with final state: FAILED, and exit status: -1000 2015-05-12 21:41:15,506 INFO org.apache.hadoop.yarn.server.resourcemanager.= rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000001 State = change from LAUNCHED to FINAL_SAVING 2015-05-12 21:41:15,506 INFO org.apache.hadoop.yarn.server.resourcemanager.= ApplicationMasterService: Unregistering app attempt : appattempt_1431466075= 462_0001_000001 2015-05-12 21:41:15,507 INFO org.apache.hadoop.yarn.server.resourcemanager.= security.AMRMTokenSecretManager: Application finished, removing password fo= r appattempt_1431466075462_0001_000001 2015-05-12 21:41:15,507 INFO org.apache.hadoop.yarn.server.resourcemanager.= rmapp.attempt.RMAppAttemptImpl: 
appattempt_1431466075462_0001_000001 State = change from FINAL_SAVING to FAILED 2015-05-12 21:41:15,508 INFO org.apache.hadoop.yarn.server.resourcemanager.= rmapp.RMAppImpl: The number of failed attempts is 1. The max attempts is 2 2015-05-12 21:41:15,508 INFO org.apache.hadoop.yarn.server.resourcemanager.= ApplicationMasterService: Registering app attempt : appattempt_143146607546= 2_0001_000002 2015-05-12 21:41:15,508 INFO org.apache.hadoop.yarn.server.resourcemanager.= rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000002 State = change from NEW to SUBMITTED 2015-05-12 21:41:15,508 INFO org.apache.hadoop.yarn.server.resourcemanager.= scheduler.capacity.CapacityScheduler: Application Attempt appattempt_143146= 6075462_0001_000001 is done. finalState=3DFAILED 2015-05-12 21:41:15,510 INFO org.apache.hadoop.yarn.server.resourcemanager.= rmcontainer.RMContainerImpl: container_1431466075462_0001_01_000001 Contain= er Transitioned from RUNNING to KILLED 2015-05-12 21:41:15,510 INFO org.apache.hadoop.yarn.server.resourcemanager.= scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1431= 466075462_0001_01_000001 in state: KILLED event:KILL 2015-05-12 21:41:15,510 INFO org.apache.hadoop.yarn.server.resourcemanager.= RMAuditLogger: USER=3Dtestuser OPERATION=3DAM Released Container TARGET=3DS= chedulerApp RESULT=3DSUCCESS APPID=3Dapplication_1431466075462_0001 = CONTAINERID=3Dcontainer_1431466075462_0001_01_000001 2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.= scheduler.SchedulerNode: Released container container_1431466075462_0001_01= _000001 of capacity on host ip-10-10-128-10.ec2.int= ernal:9032, which currently has 0 containers, used and= available, release resources=3Dtrue 2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.= scheduler.capacity.LeafQueue: default used=3D numContai= ners=3D0 user=3Dtestuser user-resources=3D 2015-05-12 21:41:15,511 INFO 
org.apache.hadoop.yarn.server.resourcemanager.= scheduler.capacity.LeafQueue: completedContainer container=3DContainer: [Co= ntainerId: container_1431466075462_0001_01_000001, NodeId: ip-10-10-128-10.= ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Reso= urce: , Priority: 0, Token: Token { kind: ContainerT= oken, service: 10.10.128.10:9032 }, ] queue=3Ddefault: capacity=3D1.0, abso= luteCapacity=3D1.0, usedResources=3D, usedCapacity=3D0.= 0, absoluteUsedCapacity=3D0.0, numApps=3D1, numContainers=3D0 cluster=3D 2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.= scheduler.capacity.ParentQueue: completedContainer queue=3Droot usedCapacit= y=3D0.0 absoluteUsedCapacity=3D0.0 used=3D cluster=3D 2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.= scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default st= ats: default: capacity=3D1.0, absoluteCapacity=3D1.0, usedResources=3D, usedCapacity=3D0.0, absoluteUsedCapacity=3D0.0, numApps=3D= 1, numContainers=3D0 2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.= scheduler.capacity.CapacityScheduler: Application attempt appattempt_143146= 6075462_0001_000001 released container container_1431466075462_0001_01_0000= 01 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=3D0 availab= le=3D8192 used=3D0 with event: KILL 2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.= scheduler.AppSchedulingInfo: Application application_1431466075462_0001 req= uests cleared 2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.= scheduler.capacity.LeafQueue: Application removed - appId: application_1431= 466075462_0001 user: testuser queue: default #user-pending-applications: 0 = #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-a= pplications: 0 2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.= scheduler.capacity.LeafQueue: 
Application application_1431466075462_0001 fr= om user: testuser activated in queue: default 2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.= scheduler.capacity.LeafQueue: Application added - appId: application_143146= 6075462_0001 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.= capacity.LeafQueue$User@5fe8d552, leaf-queue: default= #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-= applications: 0 #queue-active-applications: 1 2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.= scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_= 1431466075462_0001_000002 to scheduler from user testuser in queue default 2015-05-12 21:41:15,512 INFO org.apache.hadoop.yarn.server.resourcemanager.= rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000002 State = change from SUBMITTED to SCHEDULED 2015-05-12 21:41:15,512 INFO org.apache.hadoop.yarn.server.resourcemanager.= amlauncher.AMLauncher: Cleaning master appattempt_1431466075462_0001_000001 2015-05-12 21:41:15,585 INFO org.apache.hadoop.yarn.server.resourcemanager.= scheduler.capacity.CapacityScheduler: Null container completed... 
2015-05-12 21:41:15,585 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431466075462_0001_02_000001 Container Transitioned from NEW to ALLOCATED
2015-05-12 21:41:15,585 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1431466075462_0001 CONTAINERID=container_1431466075462_0001_02_000001
2015-05-12 21:41:15,585 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1431466075462_0001_02_000001 of capacity on host ip-10-10-128-10.ec2.internal:9032, which has 1 containers, used and available after allocation
2015-05-12 21:41:15,586 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1431466075462_0001_000002 container=Container: [ContainerId: container_1431466075462_0001_02_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: , Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=
2015-05-12 21:41:15,586 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1
2015-05-12 21:41:15,586 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used= cluster=
2015-05-12 21:41:15,587 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : ip-10-10-128-10.ec2.internal:9032 for container : container_1431466075462_0001_02_000001
2015-05-12 21:41:15,588 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431466075462_0001_02_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-05-12 21:41:15,588 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1431466075462_0001_000002
2015-05-12 21:41:15,588 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1431466075462_0001 AttemptId: appattempt_1431466075462_0001_000002 MasterContainer: Container: [ContainerId: container_1431466075462_0001_02_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ]
2015-05-12 21:41:15,588 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000002 State change from SCHEDULED to ALLOCATED_SAVING
2015-05-12 21:41:15,588 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000002 State change from ALLOCATED_SAVING to ALLOCATED
2015-05-12 21:41:15,589 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1431466075462_0001_000002
2015-05-12 21:41:15,590 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1431466075462_0001_02_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1431466075462_0001_000002
2015-05-12 21:41:15,590 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1431466075462_0001_02_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir={{PWD}}/tmp,'-Dspark.driver.host=ip-10-10-127-10.ec2.internal','-Dspark.driver.port=53747','-Dspark.driver.appUIAddress=http://ip-10-10-127-10.ec2.internal:4040','-Dspark.master=yarn-client','-Dspark.fileserver.uri=http://10.10.127.10:37326','-Dspark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers=\"one two three\"','-Dspark.yarn.access.namenodes=hdfs://10.10.10.10:8020','-Dspark.logConf=true','-Dspark.serializer=org.apache.spark.serializer.KryoSerializer','-Dspark.executor.id=','-Dspark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar','-Dspark.executor.instances=1','-Dspark.app.name=Spark Pi','-Dspark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog','-Dspark.tachyonStore.folderName=spark-9481ab9a-85db-4bfe-9d2f-ceb45f31d37c','-Dspark.executor.cores=1','-Dspark.eventlog.enabled=true','-Dspark.authenticate=true',-Dspark.yarn.app.container.log.dir=,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'ip-10-10-127-10.ec2.internal:53747',--executor-memory,1024m,--executor-cores,1,--num-executors ,1,1>,/stdout,2>,/stderr
2015-05-12 21:41:15,590 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1431466075462_0001_000002
2015-05-12 21:41:15,590 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1431466075462_0001_000002
2015-05-12 21:41:15,607 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1431466075462_0001_02_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1431466075462_0001_000002
2015-05-12 21:41:15,607 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000002 State change from ALLOCATED to LAUNCHED
2015-05-12 21:41:16,590 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2015-05-12 21:41:16,590 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431466075462_0001_02_000001 Container Transitioned from ACQUIRED to COMPLETED
2015-05-12 21:41:16,591 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1431466075462_0001_02_000001 in state: COMPLETED event:FINISHED
2015-05-12 21:41:16,591 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1431466075462_0001 CONTAINERID=container_1431466075462_0001_02_000001
2015-05-12 21:41:16,591 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1431466075462_0001_02_000001 of capacity on host ip-10-10-128-10.ec2.internal:9032, which currently has 0 containers, used and available, release resources=true
2015-05-12 21:41:16,591 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used= numContainers=0 user=testuser user-resources=
2015-05-12 21:41:16,591 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1431466075462_0001_02_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=
2015-05-12 21:41:16,591 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used= cluster=
2015-05-12 21:41:16,591 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2015-05-12 21:41:16,591 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1431466075462_0001_000002 released container container_1431466075462_0001_02_000001 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=0 available=8192 used=0 with event: FINISHED
2015-05-12 21:41:16,600 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1431466075462_0001_000002 with final state: FAILED, and exit status: -1000
2015-05-12 21:41:16,601 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000002 State change from LAUNCHED to FINAL_SAVING
2015-05-12 21:41:16,601 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1431466075462_0001_000002
2015-05-12 21:41:16,601 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1431466075462_0001_000002
2015-05-12 21:41:16,601 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000002 State change from FINAL_SAVING to FAILED
2015-05-12 21:41:16,601 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 2. The max attempts is 2
2015-05-12 21:41:16,602 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1431466075462_0001 with final state: FAILED
2015-05-12 21:41:16,602 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1431466075462_0001 State change from ACCEPTED to FINAL_SAVING
2015-05-12 21:41:16,602 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1431466075462_0001
2015-05-12 21:41:16,603 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1431466075462_0001_000002 is done. finalState=FAILED
2015-05-12 21:41:16,603 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1431466075462_0001 requests cleared
2015-05-12 21:41:16,603 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1431466075462_0001 user: testuser queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2015-05-12 21:41:16,603 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Application application_1431466075462_0001 failed 2 times due to AM Container for appattempt_1431466075462_0001_000002 exited with exitCode: -1000
For more detailed output, check application tracking page:https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1431466075462_0001/Then, click on links to logs of each attempt.
Diagnostics: Application application_1431466075462_0001 initialization failed (exitCode=255) with output: main : command provided 0
main : user is testuser
main : requested yarn user is testuser
Path /tmp/hadoop-yarn/nm-local-dir/usercache/testuser/appcache/application_1431466075462_0001 does not have desired permission.
Did not create any app directories
Failing this attempt. Failing the application.
2015-05-12 21:41:16,604 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1431466075462_0001 State change from FINAL_SAVING to FAILED
2015-05-12 21:41:16,605 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1431466075462_0001 user: testuser leaf-queue of parent: root #applications: 0
2015-05-12 21:41:16,605 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1431466075462_0001 failed 2 times due to AM Container for appattempt_1431466075462_0001_000002 exited with exitCode: -1000
For more detailed output, check application tracking page:https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1431466075462_0001/Then, click on links to logs of each attempt.
Diagnostics: Application application_1431466075462_0001 initialization failed (exitCode=255) with output: main : command provided 0
main : user is testuser
main : requested yarn user is testuser
Path /tmp/hadoop-yarn/nm-local-dir/usercache/testuser/appcache/application_1431466075462_0001 does not have desired permission.
Did not create any app directories
Failing this attempt. Failing the application. APPID=application_1431466075462_0001
2015-05-12 21:41:16,607 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1431466075462_0001,name=Spark Pi,user=testuser,queue=default,state=FAILED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/cluster/app/application_1431466075462_0001,appMasterHost=N/A,startTime=1431466159129,finishTime=1431466876602,finalStatus=FAILED
2015-05-12 21:41:16,629 INFO org.apache.hadoop.hdfs.DFSClient: Cancelling HDFS_DELEGATION_TOKEN token 16 for testuser on 10.10.10.10:8020
2015-05-12 21:41:17,593 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2015-05-12 21:41:17,594 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2015-05-12 21:41:18,597 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
[yarn@ip-10-10-127-10 hadoop]$

Keith Nance
Sr. Software Engineer
Email: knance@smartronix.com
Cell: 808-343-0071
www.smartronix.com

Hi,

 

This looks like a permission issue in secure mode:

Path /tmp/hadoop-yarn/nm-local-dir/usercache/testuser/appcache/application_1431466075462_0001 does not have desired permission.

 

Would you confirm the directory permissions are as described in the documentation below?

http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html#Configuration
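As a concrete check, something along these lines could be run on the NodeManager host. The path comes from the error above; the SecureMode table expects the local dirs under yarn.nodemanager.local-dirs to be owned by yarn:hadoop with mode drwxr-xr-x (755) when the LinuxContainerExecutor is used. The snippet only demonstrates the expected mode on a scratch directory; applying it to the real dirs (and the chown to yarn:hadoop) needs root and your actual configured paths:

```shell
# On a real node, inspect the configured dirs instead, e.g.:
#   ls -ld /tmp/hadoop-yarn/nm-local-dir /tmp/hadoop-yarn/nm-local-dir/usercache
# and, as root, apply the expected ownership and mode:
#   chown yarn:hadoop /tmp/hadoop-yarn/nm-local-dir && chmod 755 /tmp/hadoop-yarn/nm-local-dir

# Demonstrate the expected mode on a scratch copy of the layout:
base=$(mktemp -d)
mkdir -p "$base/nm-local-dir/usercache"
chmod 755 "$base/nm-local-dir" "$base/nm-local-dir/usercache"
stat -c '%a' "$base/nm-local-dir"   # prints 755 (i.e. drwxr-xr-x)
```

If the mode or ownership differs on the real dirs, the container-executor refuses to create the per-application appcache directories, which matches the "Did not create any app directories" diagnostic.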

 

Thanks & Regards

Rohith Sharma K S

From: Nance, Keith [mailto:knance@smartronix.com]
Sent: 13 May 2015 05:07
To: user@hadoop.apache.org
Subject: spark job hangs/fails with localizer failing

 

At wits end…unable to get a simple Spark Pi application to run on a Secured Yarn cluster. Help is MUCH appreciated.

Below are the log entries for the Spark Job, Node Manager, and Resource Manager.

 

###: SPARK USER/JOB :###

[testuser@ip-10-10-127-10 spark]$ ./bin/spark-submit --verbose --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 1 --executor-cores 1 lib/spark-examples*.jar 10

/home/testuser/spark/conf/spark-env.sh: line 54: -Dspark.history.kerberos.principal=spark/ip-10-10-127-10.ec2.internal@MALARD.LOCAL: No such file or directory
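(Side note on that spark-env.sh error: a bare -D option on its own line is interpreted by bash as a command, hence "No such file or directory". JVM flags like this normally live inside a quoted variable assignment; a sketch, assuming the flag is meant for the history server via the SPARK_HISTORY_OPTS convention from spark-env.sh:)

```shell
# Hypothetical fix for spark-env.sh line 54: keep JVM -D flags inside a
# quoted assignment rather than on a line by themselves.
export SPARK_HISTORY_OPTS="$SPARK_HISTORY_OPTS -Dspark.history.kerberos.principal=spark/ip-10-10-127-10.ec2.internal@MALARD.LOCAL"
```

This is unrelated to the localizer failure, but it keeps the shell from trying to execute the option as a command.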

Using properties file: /home/testuser/spark/conf/spark-defaults.conf
Adding default property: spark.serializer=org.apache.spark.serializer.KryoSerializer
Adding default property: spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
Adding default property: spark.yarn.access.namenodes=hdfs://10.10.10.10:8020
Adding default property: spark.logConf=true
Adding default property: spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
Adding default property: spark.master=yarn-client
Adding default property: spark.authenticate=true
Adding default property: spark.eventlog.enabled=true

Parsed arguments:

  master                  yarn-client
  deployMode              null
  executorMemory          null
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /home/testuser/spark/conf/spark-defaults.conf
  driverMemory            null
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
  driverExtraJavaOptions  null
  supervise               false
  queue                   null
  numExecutors            1
  files                   null
  pyFiles                 null
  archives                null
  mainClass               org.apache.spark.examples.SparkPi
  primaryResource         file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
  name                    org.apache.spark.examples.SparkPi
  childArgs               [10]
  jars                    null
  packages                null
  repositories            null
  verbose                 true

 

Spark properties used, including those specified through --conf and those from the properties file /home/testuser/spark/conf/spark-defaults.conf:

  spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
  spark.logConf -> true
  spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
  spark.authenticate -> true
  spark.serializer -> org.apache.spark.serializer.KryoSerializer
  spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
  spark.master -> yarn-client
  spark.eventlog.enabled -> true

 

 

Main class:

org.apache.spark.examples.SparkPi

Arguments:

10

System properties:

spark.yarn.access.namenodes -> hdfs://10.10.10.10:8020
spark.executor.instances -> 1
spark.logConf -> true
spark.eventlog.dir -> hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.authenticate -> true
SPARK_SUBMIT -> true
spark.serializer -> org.apache.spark.serializer.KryoSerializer
spark.executor.extraJavaOptions -> -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.app.name -> org.apache.spark.examples.SparkPi
spark.jars -> file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.master -> yarn-client
spark.executor.cores -> 1
spark.eventlog.enabled -> true
Classpath elements:
file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar

 

 

15/05/12 21:29:03 INFO spark.SparkContext: Running Spark version 1.3.1
15/05/12 21:29:03 INFO spark.SparkContext: Spark configuration:
spark.app.name=Spark Pi
spark.authenticate=true
spark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog
spark.eventlog.enabled=true
spark.executor.cores=1
spark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.executor.instances=1
spark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar
spark.logConf=true
spark.master=yarn-client
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.yarn.access.namenodes=hdfs://10.10.10.10:8020

15/05/12 21:29:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/12 21:29:04 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/12 21:29:04 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/12 21:29:05 INFO spark.SecurityManager: adding secret to credentials in yarn mode
15/05/12 21:29:05 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/12 21:29:07 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/05/12 21:29:07 INFO Remoting: Starting remoting
15/05/12 21:29:07 INFO util.Utils: Successfully started service 'sparkDriver' on port 53747.
15/05/12 21:29:07 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@ip-10-10-127-10.ec2.internal:53747]
15/05/12 21:29:07 INFO spark.SparkEnv: Registering MapOutputTracker
15/05/12 21:29:07 INFO spark.SparkEnv: Registering BlockManagerMaster
15/05/12 21:29:07 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-0877f4af-cae7-4d4c-b9f3-434712e8a654/blockmgr-f202f0b4-3842-40f5-934f-43fd764e641a
15/05/12 21:29:07 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
15/05/12 21:29:08 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-20236d03-2f2c-4619-bd6b-859fbd4b18b9/httpd-3ac7254f-3216-4181-8323-bb7493bfea2a
15/05/12 21:29:08 INFO spark.HttpServer: Starting HTTP Server
15/05/12 21:29:08 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/12 21:29:08 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:37326
15/05/12 21:29:08 INFO util.Utils: Successfully started service 'HTTP file server' on port 37326.
15/05/12 21:29:08 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/05/12 21:29:08 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/05/12 21:29:08 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/05/12 21:29:08 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/05/12 21:29:08 INFO ui.SparkUI: Started SparkUI at http://ip-10-10-127-10.ec2.internal:4040
15/05/12 21:29:09 INFO spark.SparkContext: Added JAR file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar at http://10.10.127.10:37326/jars/spark-examples-1.3.1-hadoop2.6.0.jar with timestamp 1431466149854
15/05/12 21:29:10 INFO client.RMProxy: Connecting to ResourceManager at ip-10-10-127-10.ec2.internal/10.10.127.10:8032
15/05/12 21:29:11 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
15/05/12 21:29:11 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
15/05/12 21:29:11 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/05/12 21:29:11 INFO yarn.Client: Setting up container launch context for our AM
15/05/12 21:29:11 INFO yarn.Client: Preparing resources for our AM container
15/05/12 21:29:13 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 16 for testuser on 10.10.10.10:8020
15/05/12 21:29:13 INFO yarn.Client: Uploading resource file:/home/testuser/spark/lib/spark-assembly-1.3.1-hadoop2.6.0.jar -> hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1431466075462_0001/spark-assembly-1.3.1-hadoop2.6.0.jar
15/05/12 21:29:18 INFO yarn.Client: Setting up the launch environment for our AM container
15/05/12 21:29:18 INFO spark.SecurityManager: Changing view acls to: testuser
15/05/12 21:29:18 INFO spark.SecurityManager: Changing modify acls to: testuser
15/05/12 21:29:18 INFO spark.SecurityManager: SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(testuser); users with modify permissions: Set(testuser)
15/05/12 21:29:18 INFO yarn.Client: Submitting application 1 to ResourceManager
15/05/12 21:29:21 INFO impl.YarnClientImpl: Submitted application application_1431466075462_0001
15/05/12 21:29:22 INFO yarn.Client: Application report for application_1431466075462_0001 (state: ACCEPTED)
15/05/12 21:29:22 INFO yarn.Client:
         client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1431466159129
         final status: UNDEFINED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1431466075462_0001/
         user: testuser
15/05/12 21:29:23 INFO yarn.Client: Application report for application_1431466075462_0001 (state: ACCEPTED)
15/05/12 21:29:24 INFO yarn.Client: Application report for application_1431466075462_0001 (state: ACCEPTED)
15/05/12 21:29:25 INFO yarn.Client: Application report for application_1431466075462_0001 (state: ACCEPTED)
...(TRUNCATED)...

15/05/12 21:41:13 INFO yarn.Client: Application report for application_1431466075462_0001 (state: ACCEPTED)
15/05/12 21:41:14 INFO yarn.Client: Application report for application_1431466075462_0001 (state: ACCEPTED)
15/05/12 21:41:15 INFO yarn.Client: Application report for application_1431466075462_0001 (state: ACCEPTED)
15/05/12 21:41:16 INFO yarn.Client: Application report for application_1431466075462_0001 (state: FAILED)
15/05/12 21:41:16 INFO yarn.Client:
         client token: N/A
         diagnostics: Application application_1431466075462_0001 failed 2 times due to AM Container for appattempt_1431466075462_0001_000002 exited with exitCode: -1000
For more detailed output, check application tracking page:https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1431466075462_0001/Then, click on links to logs of each attempt.
Diagnostics: Application application_1431466075462_0001 initialization failed (exitCode=255) with output: main : command provided 0
main : user is testuser
main : requested yarn user is testuser
Path /tmp/hadoop-yarn/nm-local-dir/usercache/testuser/appcache/application_1431466075462_0001 does not have desired permission.
Did not create any app directories

Failing this attempt. Failing the application.
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1431466159129
         final status: FAILED
         tracking URL: https://ip-10-10-127-10.ec2.internal:8090/cluster/app/application_1431466075462_0001
         user: testuser
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:113)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:59)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:381)
        at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:28)
        at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
[testuser@ip-10-10-127-10 spark]$
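The localizer diagnostics above point at directory permissions under yarn.nodemanager.local-dirs. A first check, along the lines of the SecureMode guide, is to audit each level of the local-dir layout: the guide calls for the local dirs to be drwxr-xr-x and owned by the yarn user, and the NodeManager keeps nmPrivate at 700. The sketch below is self-contained (a scratch directory stands in for /tmp/hadoop-yarn/nm-local-dir); on a real node, point LOCAL_DIR at the configured local dir and also confirm ownership, which the sketch cannot exercise:

```shell
# Sketch of a local-dir permission audit. The scratch dir stands in for
# /tmp/hadoop-yarn/nm-local-dir; target modes (755 on the dirs, 700 on
# nmPrivate) follow the SecureMode guide and NodeManager defaults.
LOCAL_DIR="$(mktemp -d)"
mkdir -p "$LOCAL_DIR/usercache" "$LOCAL_DIR/filecache" "$LOCAL_DIR/nmPrivate"
chmod 755 "$LOCAL_DIR" "$LOCAL_DIR/usercache" "$LOCAL_DIR/filecache"
chmod 700 "$LOCAL_DIR/nmPrivate"

# Compare actual modes against the expected ones (stat -c is GNU stat).
for d in "$LOCAL_DIR" "$LOCAL_DIR/usercache" "$LOCAL_DIR/filecache"; do
  [ "$(stat -c '%a' "$d")" = "755" ] || echo "unexpected mode on $d"
done
[ "$(stat -c '%a' "$LOCAL_DIR/nmPrivate")" = "700" ] || echo "unexpected mode on nmPrivate"
echo "layout check done"
```

On the real node, run `stat -c '%U %a %n'` on each directory so ownership (which should be the yarn user) is checked alongside the mode.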

####: NODEMANAGER :####

2015-05-12 21:29:22,237 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1431466075462_0001_000001 (auth:SIMPLE)
2015-05-12 21:29:22,408 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1431466075462_0001_000001: id: appattempt_1431466075462_0001_000001: no such user
2015-05-12 21:29:22,409 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1431466075462_0001_000001
2015-05-12 21:29:22,409 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1431466075462_0001_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-12 21:29:22,570 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1431466075462_0001_01_000001 by user testuser
2015-05-12 21:29:22,648 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1431466075462_0001
2015-05-12 21:29:22,663 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1431466075462_0001 transitioned from NEW to INITING
2015-05-12 21:29:22,679 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1431466075462_0001 transitioned from INITING to RUNNING
2015-05-12 21:29:22,680 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser  IP=10.10.127.10  OPERATION=Start Container Request  TARGET=ContainerManageImpl  RESULT=SUCCESS  APPID=application_1431466075462_0001  CONTAINERID=container_1431466075462_0001_01_000001
2015-05-12 21:29:22,689 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1431466075462_0001_01_000001 to application application_1431466075462_0001
2015-05-12 21:29:22,707 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431466075462_0001_01_000001 transitioned from NEW to LOCALIZING
2015-05-12 21:29:22,707 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1431466075462_0001
2015-05-12 21:29:22,773 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://10.10.10.10:8020/user/testuser/.sparkStaging/application_1431466075462_0001/spark-assembly-1.3.1-hadoop2.6.0.jar transitioned from INIT to DOWNLOADING
2015-05-12 21:29:22,773 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1431466075462_0001_01_000001
2015-05-12 21:29:23,306 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /tmp/hadoop-yarn/nm-local-dir/nmPrivate/container_1431466075462_0001_01_000001.tokens. Credentials list:

2015-05-12 21:41:15,528 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1431466075462_0001_000001 (auth:SIMPLE)
2015-05-12 21:41:15,535 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1431466075462_0001_000001: id: appattempt_1431466075462_0001_000001: no such user
2015-05-12 21:41:15,536 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1431466075462_0001_000001
2015-05-12 21:41:15,536 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1431466075462_0001_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-12 21:41:15,539 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1431466075462_0001_01_000001
2015-05-12 21:41:15,539 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser  IP=10.10.127.10  OPERATION=Stop Container Request  TARGET=ContainerManageImpl  RESULT=SUCCESS  APPID=application_1431466075462_0001  CONTAINERID=container_1431466075462_0001_01_000001
2015-05-12 21:41:15,566 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431466075462_0001_01_000001 transitioned from LOCALIZING to KILLING
2015-05-12 21:41:15,568 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431466075462_0001_01_000001 transitioned from KILLING to DONE
2015-05-12 21:41:15,568 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1431466075462_0001_01_000001 from application application_1431466075462_0001
2015-05-12 21:41:15,569 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1431466075462_0001
2015-05-12 21:41:15,594 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1431466075462_0001_000002 (auth:SIMPLE)
2015-05-12 21:41:15,602 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user appattempt_1431466075462_0001_000002: id: appattempt_1431466075462_0001_000002: no such user
2015-05-12 21:41:15,602 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user appattempt_1431466075462_0001_000002
2015-05-12 21:41:15,603 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1431466075462_0001_000002 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
2015-05-12 21:41:15,604 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1431466075462_0001_02_000001 by user testuser
2015-05-12 21:41:15,604 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1431466075462_0001_02_000001 to application application_1431466075462_0001
2015-05-12 21:41:15,605 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431466075462_0001_02_000001 transitioned from NEW to LOCALIZING
2015-05-12 21:41:15,605 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1431466075462_0001
2015-05-12 21:41:15,605 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1431466075462_0001_02_000001
2015-05-12 21:41:15,604 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser  IP=10.10.127.10  OPERATION=Start Container Request  TARGET=ContainerManageImpl  RESULT=SUCCESS  APPID=application_1431466075462_0001  CONTAINERID=container_1431466075462_0001_02_000001
2015-05-12 21:41:15,620 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /tmp/hadoop-yarn/nm-local-dir/nmPrivate/container_1431466075462_0001_02_000001.tokens. Credentials list:
2015-05-12 21:41:15,672 WARN org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Exit code from container container_1431466075462_0001_02_000001 startLocalizer is : 255
ExitCodeException exitCode=255:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
        at org.apache.hadoop.util.Shell.run(Shell.java:455)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
        at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.startLocalizer(LinuxContainerExecutor.java:232)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1088)
2015-05-12 21:41:15,673 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: main : command provided 0
2015-05-12 21:41:15,673 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: main : user is testuser
2015-05-12 21:41:15,673 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: main : requested yarn user is testuser
2015-05-12 21:41:15,673 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Path /tmp/hadoop-yarn/nm-local-dir/usercache/testuser/appcache/application_1431466075462_0001 does not have desired permission.
2015-05-12 21:41:15,673 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Did not create any app directories
2015-05-12 21:41:15,673 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Localizer failed
java.io.IOException: Application application_1431466075462_0001 initialization failed (exitCode=255) with output: main : command provided 0
main : user is testuser
main : requested yarn user is testuser
Path /tmp/hadoop-yarn/nm-local-dir/usercache/testuser/appcache/application_1431466075462_0001 does not have desired permission.
Did not create any app directories

        at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.startLocalizer(LinuxContainerExecutor.java:241)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1088)
Caused by: ExitCodeException exitCode=255:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
        at org.apache.hadoop.util.Shell.run(Shell.java:455)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
        at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.startLocalizer(LinuxContainerExecutor.java:232)
        ... 1 more
2015-05-12 21:41:15,674 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431466075462_0001_02_000001 transitioned from LOCALIZING to LOCALIZATION_FAILED
2015-05-12 21:41:15,675 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=testuser  OPERATION=Container Finished - Failed  TARGET=ContainerImpl  RESULT=FAILURE  DESCRIPTION=Container failed with state: LOCALIZATION_FAILED  APPID=application_1431466075462_0001  CONTAINERID=container_1431466075462_0001_02_000001
2015-05-12 21:41:15,675 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431466075462_0001_02_000001 transitioned from LOCALIZATION_FAILED to DONE
2015-05-12 21:41:15,675 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1431466075462_0001_02_000001 from application application_1431466075462_0001
2015-05-12 21:41:15,675 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1431466075462_0001
2015-05-12 21:41:17,593 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1431466075462_0001_02_000001]
2015-05-12 21:41:17,594 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1431466075462_0001 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2015-05-12 21:41:17,594 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1431466075462_0001
2015-05-12 21:41:17,595 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1431466075462_0001 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2015-05-12 21:41:17,595 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1431466075462_0001, with delay of 10800 seconds
2015-05-12 21:41:17,654 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1431466075462_0001_01_000001
2015-05-12 21:41:17,654 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1431466075462_0001_02_000001
2015-05-12 21:44:48,514 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Localizer failed
java.io.IOException: java.lang.InterruptedException
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:541)
        at org.apache.hadoop.util.Shell.run(Shell.java:455)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
        at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.startLocalizer(LinuxContainerExecutor.java:232)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1088)
2015-05-12 21:44:48,514 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Event EventType: RESOURCE_FAILED sent to absent container container_1431466075462_0001_01_000001
[yarn@ip-10-10-128-10 hadoop]$
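One pattern worth ruling out (an assumption on my part, not something these logs prove): per-user cache directories left over under usercache from earlier runs can carry permissions the LinuxContainerExecutor refuses, and it then reports exactly this "does not have desired permission" / "Did not create any app directories" output. A typical remedy is to stop the NodeManager and clear the stale per-user cache so it is recreated on the next localization. The snippet below only simulates the detect-and-clear step on a scratch path (the mode 750 is a stand-in, not the executor's actual check):

```shell
# Simulated detect-and-clear of a stale appcache directory. On a real node
# the path would be /tmp/hadoop-yarn/nm-local-dir/usercache/<user>/appcache,
# and the NodeManager should be stopped before clearing it.
APPDIR="$(mktemp -d)/usercache/testuser/appcache/application_0001"
mkdir -p "$APPDIR"
chmod 700 "$APPDIR"                 # stand-in for an unexpected mode
mode=$(stat -c '%a' "$APPDIR")
if [ "$mode" != "750" ]; then       # 750 is an illustrative target, not the real rule
  echo "Path $APPDIR does not have desired permission ($mode)"
  rm -rf "$APPDIR"                  # clear so the NodeManager can recreate it
fi
[ -d "$APPDIR" ] || echo "stale app directory removed"
```

If clearing usercache on each NodeManager host makes the next attempt localize, the root cause was stale directory state rather than the configured modes themselves.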

 

 

 

####: RESOURCEMANAGER :####

2015-05-12 21:29:11,624 INFO SecurityLogger.org.apac= he.hadoop.ipc.Server: Auth successful for testuser@MALARD.LOCAL (auth:KE= RBEROS)

2015-05-12 21:29:11,702 INFO SecurityLogger.org.apac= he.hadoop.security.authorize.ServiceAuthorizationManager: Authorization suc= cessful for testuser@MALARD.LOCAL (auth:KE= RBEROS) for protocol=3Dinterface org.apache.hadoop.yarn.api.ApplicationClie= ntProtocolPB

2015-05-12 21:29:11,807 INFO org.apache.hadoop.yarn.= server.resourcemanager.ClientRMService: Allocated new applicationId: 1=

2015-05-12 21:29:19,129 WARN org.apache.hadoop.yarn.= server.resourcemanager.rmapp.RMAppImpl: The specific max attempts: 0 for ap= plication: 1 is invalid, because it is out of the range [1, 2]. Use the glo= bal max attempts instead.

2015-05-12 21:29:19,144 INFO org.apache.hadoop.yarn.= server.resourcemanager.ClientRMService: Application with id 1 submitted by = user testuser

2015-05-12 21:29:19,145 INFO org.apache.hadoop.yarn.= server.resourcemanager.RMAuditLogger: USER=3Dtestuser IP=3D10.10.127.10 OPE= RATION=3DSubmit Application Request    TARGET=3DClientRMServ= ice  RESULT=3DSUCCESS     APPID=3Dapplication_1431= 466075462_0001

2015-05-12 21:29:19,535 INFO org.apache.hadoop.yarn.= server.resourcemanager.security.DelegationTokenRenewer: application_1431466= 075462_0001 found existing hdfs token Kind: HDFS_DELEGATION_TOKEN, Service:= 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN token 16 for testuser)

2015-05-12 21:29:21,558 INFO org.apache.hadoop.yarn.= server.resourcemanager.security.DelegationTokenRenewer: Renewed delegation-= token=3D [Kind: HDFS_DELEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (H= DFS_DELEGATION_TOKEN token 16 for testuser);exp=3D1431552561549], for application_1431466075462_0001

2015-05-12 21:29:21,559 INFO org.apache.hadoop.yarn.= server.resourcemanager.security.DelegationTokenRenewer: Renew Kind: HDFS_DE= LEGATION_TOKEN, Service: 10.10.10.10:8020, Ident: (HDFS_DELEGATION_TOKEN to= ken 16 for testuser);exp=3D1431552561549 in 86399991 ms, appId =3D application_1431466075462_0001

2015-05-12 21:29:21,559 INFO org.apache.hadoop.yarn.= server.resourcemanager.rmapp.RMAppImpl: Storing application with id applica= tion_1431466075462_0001

2015-05-12 21:29:21,561 INFO org.apache.hadoop.yarn.= server.resourcemanager.rmapp.RMAppImpl: application_1431466075462_0001 Stat= e change from NEW to NEW_SAVING

2015-05-12 21:29:21,575 INFO org.apache.hadoop.yarn.= server.resourcemanager.recovery.RMStateStore: Storing info for app: applica= tion_1431466075462_0001

2015-05-12 21:29:21,589 INFO org.apache.hadoop.yarn.= server.resourcemanager.rmapp.RMAppImpl: application_1431466075462_0001 Stat= e change from NEW_SAVING to SUBMITTED

2015-05-12 21:29:21,591 INFO org.apache.hadoop.yarn.= server.resourcemanager.scheduler.capacity.ParentQueue: Application added - = appId: application_1431466075462_0001 user: testuser leaf-queue of parent: = root #applications: 1

2015-05-12 21:29:21,591 INFO org.apache.hadoop.yarn.= server.resourcemanager.scheduler.capacity.CapacityScheduler: Accepted appli= cation application_1431466075462_0001 from user: testuser, in queue: defaul= t

2015-05-12 21:29:21,593 INFO org.apache.hadoop.yarn.= server.resourcemanager.rmapp.RMAppImpl: application_1431466075462_0001 Stat= e change from SUBMITTED to ACCEPTED

2015-05-12 21:29:21,647 INFO org.apache.hadoop.yarn.= server.resourcemanager.ApplicationMasterService: Registering app attempt : = appattempt_1431466075462_0001_000001

2015-05-12 21:29:21,648 INFO org.apache.hadoop.yarn.= server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_143146607= 5462_0001_000001 State change from NEW to SUBMITTED

2015-05-12 21:29:21,671 INFO org.apache.hadoop.yarn.= server.resourcemanager.scheduler.capacity.LeafQueue: Application applicatio= n_1431466075462_0001 from user: testuser activated in queue: default

2015-05-12 21:29:21,671 INFO org.apache.hadoop.yarn.= server.resourcemanager.scheduler.capacity.LeafQueue: Application added - ap= pId: application_1431466075462_0001 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$= User@73e84841, leaf-queue: default #user-pending-applications: 0 #user-= active-applications: 1 #queue-pending-applications: 0 #queue-active-applica= tions: 1

2015-05-12 21:29:21,671 INFO org.apache.hadoop.yarn.= server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Applicat= ion Attempt appattempt_1431466075462_0001_000001 to scheduler from user tes= tuser in queue default

2015-05-12 21:29:21,673 INFO org.apache.hadoop.yarn.= server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_143146607= 5462_0001_000001 State change from SUBMITTED to SCHEDULED

2015-05-12 21:29:21,869 INFO org.apache.hadoop.yarn.= server.resourcemanager.rmcontainer.RMContainerImpl: container_1431466075462= _0001_01_000001 Container Transitioned from NEW to ALLOCATED

2015-05-12 21:29:21,869 INFO org.apache.hadoop.yarn.= server.resourcemanager.RMAuditLogger: USER=3Dtestuser OPERATION=3DAM Alloca= ted Container        TARGET=3DSchedulerA= pp     RESULT=3DSUCCESS  APPID=3Dapplication_14314= 66075462_0001       CONTAINERID=3Dcontainer_1= 431466075462_0001_01_000001

2015-05-12 21:29:21,870 INFO org.apache.hadoop.yarn.= server.resourcemanager.scheduler.SchedulerNode: Assigned container containe= r_1431466075462_0001_01_000001 of capacity <memory:1024, vCores:1> on= host ip-10-10-128-10.ec2.internal:9032, which has 1 containers, <memory:1024, vCores:1> used and <memory:= 7168, vCores:7> available after allocation

2015-05-12 21:29:21,870 INFO org.apache.hadoop.yarn.= server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer appl= ication attempt=3Dappattempt_1431466075462_0001_000001 container=3DContaine= r: [ContainerId: container_1431466075462_0001_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-1= 0.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, = Token: null, ] queue=3Ddefault: capacity=3D1.0, absoluteCapacity=3D1.0, use= dResources=3D<memory:0, vCores:0>, usedCapacity=3D0.0, absoluteUsedCapacity=3D0.0, numApps=3D1, numContainers=3D0 clusterResource= =3D<memory:8192, vCores:8>

2015-05-12 21:29:21,870 INFO org.apache.hadoop.yarn.= server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned = queue: root.default stats: default: capacity=3D1.0, absoluteCapacity=3D1.0,= usedResources=3D<memory:1024, vCores:1>, usedCapacity=3D0.125, absoluteUsedCapacity=3D0.125, numApps=3D1, numContai= ners=3D1

2015-05-12 21:29:21,870 INFO org.apache.hadoop.yarn.= server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer qu= eue=3Droot usedCapacity=3D0.125 absoluteUsedCapacity=3D0.125 used=3D<mem= ory:1024, vCores:1> cluster=3D<memory:8192, vCores:8>

2015-05-12 21:29:21,891 INFO org.apache.hadoop.yarn.= server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken f= or nodeId : ip-10-10-128-10.ec2.internal:9032 for container : container_143= 1466075462_0001_01_000001

2015-05-12 21:29:21,907 INFO org.apache.hadoop.yarn.= server.resourcemanager.rmcontainer.RMContainerImpl: container_1431466075462= _0001_01_000001 Container Transitioned from ALLOCATED to ACQUIRED

2015-05-12 21:29:21,907 INFO org.apache.hadoop.yarn.= server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set fo= r appattempt_1431466075462_0001_000001

2015-05-12 21:29:21,910 INFO org.apache.hadoop.yarn.= server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: App= Id: application_1431466075462_0001 AttemptId: appattempt_1431466075462_0001= _000001 MasterContainer: Container: [ContainerId: container_1431466075462_0001_01_000001, NodeId: ip-10-10-128= -10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, = Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: = ContainerToken, service: 10.10.128.10:9032 }, ]

2015-05-12 21:29:21,927 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000001 State change from SCHEDULED to ALLOCATED_SAVING

2015-05-12 21:29:21,942 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000001 State change from ALLOCATED_SAVING to ALLOCATED

2015-05-12 21:29:21,945 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1431466075462_0001_000001

2015-05-12 21:29:21,970 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1431466075462_0001_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1431466075462_0001_000001

2015-05-12 21:29:21,970 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1431466075462_0001_01_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir={{PWD}}/tmp,'-Dspark.driver.host=ip-10-10-127-10.ec2.internal','-Dspark.driver.port=53747','-Dspark.driver.appUIAddress=http://ip-10-10-127-10.ec2.internal:4040','-Dspark.master=yarn-client','-Dspark.fileserver.uri=http://10.10.127.10:37326','-Dspark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers=\"one two three\"','-Dspark.yarn.access.namenodes=hdfs://10.10.10.10:8020','-Dspark.logConf=true','-Dspark.serializer=org.apache.spark.serializer.KryoSerializer','-Dspark.executor.id=<driver>','-Dspark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar','-Dspark.executor.instances=1','-Dspark.app.name=Spark Pi','-Dspark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog','-Dspark.tachyonStore.folderName=spark-9481ab9a-85db-4bfe-9d2f-ceb45f31d37c','-Dspark.executor.cores=1','-Dspark.eventlog.enabled=true','-Dspark.authenticate=true',-Dspark.yarn.app.container.log.dir=<LOG_DIR>,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'ip-10-10-127-10.ec2.internal:53747',--executor-memory,1024m,--executor-cores,1,--num-executors,1,1>,<LOG_DIR>/stdout,2>,<LOG_DIR>/stderr

2015-05-12 21:29:21,982 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1431466075462_0001_000001

2015-05-12 21:29:21,985 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1431466075462_0001_000001

2015-05-12 21:29:22,710 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1431466075462_0001_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1431466075462_0001_000001

2015-05-12 21:29:22,710 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000001 State change from ALLOCATED to LAUNCHED

2015-05-12 21:29:22,915 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431466075462_0001_01_000001 Container Transitioned from ACQUIRED to RUNNING

2015-05-12 21:37:55,432 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler: Release request cache is cleaned up

2015-05-12 21:41:15,504 INFO org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: Expired:appattempt_1431466075462_0001_000001 Timed out after 600 secs

2015-05-12 21:41:15,505 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1431466075462_0001_000001 with final state: FAILED, and exit status: -1000

2015-05-12 21:41:15,506 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000001 State change from LAUNCHED to FINAL_SAVING

2015-05-12 21:41:15,506 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1431466075462_0001_000001

2015-05-12 21:41:15,507 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1431466075462_0001_000001

2015-05-12 21:41:15,507 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000001 State change from FINAL_SAVING to FAILED

2015-05-12 21:41:15,508 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 1. The max attempts is 2

2015-05-12 21:41:15,508 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1431466075462_0001_000002

2015-05-12 21:41:15,508 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000002 State change from NEW to SUBMITTED

2015-05-12 21:41:15,508 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1431466075462_0001_000001 is done. finalState=FAILED

2015-05-12 21:41:15,510 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431466075462_0001_01_000001 Container Transitioned from RUNNING to KILLED

2015-05-12 21:41:15,510 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1431466075462_0001_01_000001 in state: KILLED event:KILL

2015-05-12 21:41:15,510 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1431466075462_0001 CONTAINERID=container_1431466075462_0001_01_000001

2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1431466075462_0001_01_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true

2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=testuser user-resources=<memory:0, vCores:0>

2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1431466075462_0001_01_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>

2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>

2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0

2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1431466075462_0001_000001 released container container_1431466075462_0001_01_000001 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=0 available=8192 used=0 with event: KILL

2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1431466075462_0001 requests cleared

2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1431466075462_0001 user: testuser queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0

2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1431466075462_0001 from user: testuser activated in queue: default

2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1431466075462_0001 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@5fe8d552, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1

2015-05-12 21:41:15,511 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1431466075462_0001_000002 to scheduler from user testuser in queue default

2015-05-12 21:41:15,512 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000002 State change from SUBMITTED to SCHEDULED

2015-05-12 21:41:15,512 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1431466075462_0001_000001

2015-05-12 21:41:15,585 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...

2015-05-12 21:41:15,585 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431466075462_0001_02_000001 Container Transitioned from NEW to ALLOCATED

2015-05-12 21:41:15,585 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1431466075462_0001 CONTAINERID=container_1431466075462_0001_02_000001

2015-05-12 21:41:15,585 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1431466075462_0001_02_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which has 1 containers, <memory:1024, vCores:1> used and <memory:7168, vCores:7> available after allocation

2015-05-12 21:41:15,586 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1431466075462_0001_000002 container=Container: [ContainerId: container_1431466075462_0001_02_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:8192, vCores:8>

2015-05-12 21:41:15,586 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1024, vCores:1>, usedCapacity=0.125, absoluteUsedCapacity=0.125, numApps=1, numContainers=1

2015-05-12 21:41:15,586 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.125 absoluteUsedCapacity=0.125 used=<memory:1024, vCores:1> cluster=<memory:8192, vCores:8>

2015-05-12 21:41:15,587 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : ip-10-10-128-10.ec2.internal:9032 for container : container_1431466075462_0001_02_000001

2015-05-12 21:41:15,588 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431466075462_0001_02_000001 Container Transitioned from ALLOCATED to ACQUIRED

2015-05-12 21:41:15,588 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1431466075462_0001_000002

2015-05-12 21:41:15,588 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1431466075462_0001 AttemptId: appattempt_1431466075462_0001_000002 MasterContainer: Container: [ContainerId: container_1431466075462_0001_02_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ]

2015-05-12 21:41:15,588 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000002 State change from SCHEDULED to ALLOCATED_SAVING

2015-05-12 21:41:15,588 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000002 State change from ALLOCATED_SAVING to ALLOCATED

2015-05-12 21:41:15,589 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1431466075462_0001_000002

2015-05-12 21:41:15,590 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1431466075462_0001_02_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1431466075462_0001_000002

2015-05-12 21:41:15,590 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1431466075462_0001_02_000001 : {{JAVA_HOME}}/bin/java,-server,-Xmx512m,-Djava.io.tmpdir={{PWD}}/tmp,'-Dspark.driver.host=ip-10-10-127-10.ec2.internal','-Dspark.driver.port=53747','-Dspark.driver.appUIAddress=http://ip-10-10-127-10.ec2.internal:4040','-Dspark.master=yarn-client','-Dspark.fileserver.uri=http://10.10.127.10:37326','-Dspark.executor.extraJavaOptions=-XX:+PrintGCDetails -Dkey=value -Dnumbers=\"one two three\"','-Dspark.yarn.access.namenodes=hdfs://10.10.10.10:8020','-Dspark.logConf=true','-Dspark.serializer=org.apache.spark.serializer.KryoSerializer','-Dspark.executor.id=<driver>','-Dspark.jars=file:/home/testuser/spark/lib/spark-examples-1.3.1-hadoop2.6.0.jar','-Dspark.executor.instances=1','-Dspark.app.name=Spark Pi','-Dspark.eventlog.dir=hdfs://10.10.10.10:8020/user/testuser/spark/eventlog','-Dspark.tachyonStore.folderName=spark-9481ab9a-85db-4bfe-9d2f-ceb45f31d37c','-Dspark.executor.cores=1','-Dspark.eventlog.enabled=true','-Dspark.authenticate=true',-Dspark.yarn.app.container.log.dir=<LOG_DIR>,org.apache.spark.deploy.yarn.ExecutorLauncher,--arg,'ip-10-10-127-10.ec2.internal:53747',--executor-memory,1024m,--executor-cores,1,--num-executors,1,1>,<LOG_DIR>/stdout,2>,<LOG_DIR>/stderr

2015-05-12 21:41:15,590 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1431466075462_0001_000002

2015-05-12 21:41:15,590 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1431466075462_0001_000002

2015-05-12 21:41:15,607 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1431466075462_0001_02_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] for AM appattempt_1431466075462_0001_000002

2015-05-12 21:41:15,607 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000002 State change from ALLOCATED to LAUNCHED

2015-05-12 21:41:16,590 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...

2015-05-12 21:41:16,590 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1431466075462_0001_02_000001 Container Transitioned from ACQUIRED to COMPLETED

2015-05-12 21:41:16,591 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1431466075462_0001_02_000001 in state: COMPLETED event:FINISHED

2015-05-12 21:41:16,591 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1431466075462_0001 CONTAINERID=container_1431466075462_0001_02_000001

2015-05-12 21:41:16,591 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1431466075462_0001_02_000001 of capacity <memory:1024, vCores:1> on host ip-10-10-128-10.ec2.internal:9032, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true

2015-05-12 21:41:16,591 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=testuser user-resources=<memory:0, vCores:0>

2015-05-12 21:41:16,591 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1431466075462_0001_02_000001, NodeId: ip-10-10-128-10.ec2.internal:9032, NodeHttpAddress: ip-10-10-128-10.ec2.internal:8090, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 10.10.128.10:9032 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:8192, vCores:8>

2015-05-12 21:41:16,591 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:8192, vCores:8>

2015-05-12 21:41:16,591 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0

2015-05-12 21:41:16,591 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1431466075462_0001_000002 released container container_1431466075462_0001_02_000001 on node: host: ip-10-10-128-10.ec2.internal:9032 #containers=0 available=8192 used=0 with event: FINISHED

2015-05-12 21:41:16,600 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1431466075462_0001_000002 with final state: FAILED, and exit status: -1000

2015-05-12 21:41:16,601 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000002 State change from LAUNCHED to FINAL_SAVING

2015-05-12 21:41:16,601 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1431466075462_0001_000002

2015-05-12 21:41:16,601 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1431466075462_0001_000002

2015-05-12 21:41:16,601 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1431466075462_0001_000002 State change from FINAL_SAVING to FAILED

2015-05-12 21:41:16,601 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 2. The max attempts is 2

2015-05-12 21:41:16,602 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1431466075462_0001 with final state: FAILED

2015-05-12 21:41:16,602 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1431466075462_0001 State change from ACCEPTED to FINAL_SAVING

2015-05-12 21:41:16,602 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1431466075462_0001

2015-05-12 21:41:16,603 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1431466075462_0001_000002 is done. finalState=FAILED

2015-05-12 21:41:16,603 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1431466075462_0001 requests cleared

2015-05-12 21:41:16,603 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1431466075462_0001 user: testuser queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0

2015-05-12 21:41:16,603 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Application application_1431466075462_0001 failed 2 times due to AM Container for appattempt_1431466075462_0001_000002 exited with exitCode: -1000

For more detailed output, check application tracking page:https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1431466075462_0001/Then, click on links to logs of each attempt.

Diagnostics: Application application_1431466075462_0001 initialization failed (exitCode=255) with output: main : command provided 0

main : user is testuser

main : requested yarn user is testuser

Path /tmp/hadoop-yarn/nm-local-dir/usercache/testuser/appcache/application_1431466075462_0001 does not have desired permission.

Did not create any app directories

Failing this attempt. Failing the application.

2015-05-12 21:41:16,604 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1431466075462_0001 State change from FINAL_SAVING to FAILED

2015-05-12 21:41:16,605 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1431466075462_0001 user: testuser leaf-queue of parent: root #applications: 0

2015-05-12 21:41:16,605 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=testuser OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1431466075462_0001 failed 2 times due to AM Container for appattempt_1431466075462_0001_000002 exited with exitCode: -1000

For more detailed output, check application tracking page:https://ip-10-10-127-10.ec2.internal:8090/proxy/application_1431466075462_0001/Then, click on links to logs of each attempt.

Diagnostics: Application application_1431466075462_0001 initialization failed (exitCode=255) with output: main : command provided 0

main : user is testuser

main : requested yarn user is testuser

Path /tmp/hadoop-yarn/nm-local-dir/usercache/testuser/appcache/application_1431466075462_0001 does not have desired permission.

Did not create any app directories

Failing this attempt. Failing the application. APPID=application_1431466075462_0001

2015-05-12 21:41:16,607 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1431466075462_0001,name=Spark Pi,user=testuser,queue=default,state=FAILED,trackingUrl=https://ip-10-10-127-10.ec2.internal:8090/cluster/app/application_1431466075462_0001,appMasterHost=N/A,startTime=1431466159129,finishTime=1431466876602,finalStatus=FAILED

2015-05-12 21:41:16,629 INFO org.apache.hadoop.hdfs.DFSClient: Cancelling HDFS_DELEGATION_TOKEN token 16 for testuser on 10.10.10.10:8020

2015-05-12 21:41:17,593 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...

2015-05-12 21:41:17,594 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...

2015-05-12 21:41:18,597 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...

[yarn@ip-10-10-127-10 hadoop]$
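For reference, the diagnostics above come down to the NodeManager local dirs not having the modes the secure-mode LinuxContainerExecutor expects. A quick way to audit this is a small mode-check loop; this is only a sketch — the `check_dir_perm` helper is made up for illustration, and the path and the expected `755` mode are taken from the log above and the SecureMode docs Rohith linked, so adjust both to your `yarn.nodemanager.local-dirs` setting:

```shell
#!/bin/sh
# check_dir_perm <dir> <expected-octal-mode>
# Prints OK/BAD depending on whether the directory's octal mode matches.
check_dir_perm() {
    dir=$1
    want=$2
    got=$(stat -c '%a' "$dir") || return 1   # GNU stat: %a = octal mode
    if [ "$got" = "$want" ]; then
        echo "OK   $dir (mode $got)"
    else
        echo "BAD  $dir (mode $got, expected $want)"
    fi
}

# Demo against a scratch directory; on a real node you would run it over
# every entry in yarn.nodemanager.local-dirs, e.g.:
#   check_dir_perm /tmp/hadoop-yarn/nm-local-dir 755
scratch=$(mktemp -d)
chmod 750 "$scratch"
check_dir_perm "$scratch" 755   # flagged BAD: 750 != 755
chmod 755 "$scratch"
check_dir_perm "$scratch" 755   # now reported OK
rm -rf "$scratch"
```

Ownership matters as much as mode here: the SecureMode page also expects the local/log dirs to be owned by the yarn user and the container-executor binary to be root-owned with the setuid bit, so `ls -ld` on each dir is worth a look too.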

 

Keith Nance
Sr. Software Engineer
Email: knance@smartronix.com
Cell: 808-343-0071
www.smartronix.com

 
