From: "Roland DePratti" <roland.depratti@cox.net>
To: user@hadoop.apache.org
Subject: Yarn AM is abending job when submitting a remote job to cluster
Date: Wed, 18 Feb 2015 20:19:45 -0500

I have been searching for a handle on a problem with very few clues. Any help pointing me in the right direction would be huge.

I have not received any input from the Cloudera Google groups. Perhaps this is more YARN related, and I am hoping I have better luck here. Any help is greatly appreciated.

I am running a Hadoop cluster on CDH 5.3. I also have a client machine with a standalone one-node setup (VM).

All environments are running CentOS 6.6.
I have submitted some Java MapReduce jobs locally on both the cluster and the standalone environment with successful completions.

I can submit a remote HDFS job from the client to the cluster using -conf hadoop-cluster.xml (see below) and get data back from the cluster with no problem. (A minimal sketch of the kind of driver I use for submission is at the end of this message.)

When I submit the MapReduce jobs remotely, however, the AM fails the job with the error:

        SecretManager$InvalidToken: appattempt_1424003606313_0001_000002 not found in AMRMTokenSecretManager

I searched /var/log/secure on the client and the cluster and found no unusual messages.

Here are the contents of hadoop-cluster.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!--generated by Roland-->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluser:8020</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>hdfs://mycluster:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hdfs://mycluster:8032</value>
  </property>
</configuration>

Here is the output from the job log on the cluster:

2015-02-15 07:51:06,544 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1424003606313_0001_000002
2015-02-15 07:51:06,949 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; Ignoring.
2015-02-15 07:51:06,952 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2015-02-15 07:51:06,952 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf; Ignoring.
2015-02-15 07:51:06,954 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class; Ignoring.
2015-02-15 07:51:06,957 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf; Ignoring.
2015-02-15 07:51:06,973 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2015-02-15 07:51:07,241 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2015-02-15 07:51:07,241 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@33be1aa0)
2015-02-15 07:51:07,332 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
2015-02-15 07:51:07,627 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; Ignoring.
2015-02-15 07:51:07,632 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2015-02-15 07:51:07,632 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf; Ignoring.
2015-02-15 07:51:07,639 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class; Ignoring.
2015-02-15 07:51:07,645 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf; Ignoring.
2015-02-15 07:51:07,663 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2015-02-15 07:51:08,237 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-02-15 07:51:08,429 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null
2015-02-15 07:51:08,499 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2015-02-15 07:51:08,526 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2015-02-15 07:51:08,527 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2015-02-15 07:51:08,561 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2015-02-15 07:51:08,562 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2015-02-15 07:51:08,566 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2015-02-15 07:51:08,568 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2015-02-15 07:51:08,568 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2015-02-15 07:51:08,570 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2015-02-15 07:51:08,599 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Recovery is enabled. Will try to recover from previous life on best effort basis.
2015-02-15 07:51:08,642 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Previous history file is at hdfs://mycluster.com:8020/user/cloudera/.staging/job_1424003606313_0001/job_1424003606313_0001_1.jhist
2015-02-15 07:51:09,147 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Read completed tasks from history 0
2015-02-15 07:51:09,193 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2015-02-15 07:51:09,222 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-02-15 07:51:09,277 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-02-15 07:51:09,277 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics system started
2015-02-15 07:51:09,286 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for job_1424003606313_0001 to jobTokenSecretManager
2015-02-15 07:51:09,306 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing job_1424003606313_0001 because: not enabled; too much RAM;
2015-02-15 07:51:09,324 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job job_1424003606313_0001 = 5343207. Number of splits = 5
2015-02-15 07:51:09,325 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for job job_1424003606313_0001 = 1
2015-02-15 07:51:09,325 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1424003606313_0001Job Transitioned from NEW to INITED
2015-02-15 07:51:09,327 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized, multi-container job job_1424003606313_0001.
2015-02-15 07:51:09,387 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-02-15 07:51:09,398 INFO [Socket Reader #1 for port 56348] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 56348
2015-02-15 07:51:09,418 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
2015-02-15 07:51:09,418 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-02-15 07:51:09,419 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated MRClientService at mycluster/mycluster:56348
2015-02-15 07:51:09,425 INFO [IPC Server listener on 56348] org.apache.hadoop.ipc.Server: IPC Server listener on 56348: starting
2015-02-15 07:51:09,492 INFO [main] org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-02-15 07:51:09,497 INFO [main] org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.mapreduce is not defined
2015-02-15 07:51:09,509 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-02-15 07:51:09,514 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce
2015-02-15 07:51:09,514 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static
2015-02-15 07:51:09,518 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /mapreduce/*
2015-02-15 07:51:09,518 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2015-02-15 07:51:09,529 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 34473
2015-02-15 07:51:09,529 INFO [main] org.mortbay.log: jetty-6.1.26.cloudera.4
2015-02-15 07:51:09,561 INFO [main] org.mortbay.log: Extract jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/mapreduce to /tmp/Jetty_0_0_0_0_34473_mapreduce____.ezh3w6/webapp
2015-02-15 07:51:09,932 INFO [main] org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:34473
2015-02-15 07:51:09,932 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app /mapreduce started at 34473
2015-02-15 07:51:10,425 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2015-02-15 07:51:10,430 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-02-15 07:51:10,431 INFO [Socket Reader #1 for port 41190] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 41190
2015-02-15 07:51:10,438 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-02-15 07:51:10,438 INFO [IPC Server listener on 41190] org.apache.hadoop.ipc.Server: IPC Server listener on 41190: starting
2015-02-15 07:51:10,459 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
2015-02-15 07:51:10,459 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2015-02-15 07:51:10,459 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2015-02-15 07:51:10,576 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; Ignoring.
2015-02-15 07:51:10,577 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2015-02-15 07:51:10,577 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf; Ignoring.
2015-02-15 07:51:10,577 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class; Ignoring.
2015-02-15 07:51:10,578 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf; Ignoring.
2015-02-15 07:51:10,592 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2015-02-15 07:51:10,602 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at quickstart.cloudera/myclient:8030
2015-02-15 07:51:10,749 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424003606313_0001_000002 not found in AMRMTokenSecretManager.
2015-02-15 07:51:10,750 WARN [main] org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424003606313_0001_000002 not found in AMRMTokenSecretManager.
2015-02-15 07:51:10,750 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424003606313_0001_000002 not found in AMRMTokenSecretManager.
2015-02-15 07:51:10,762 ERROR [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Exception while registering
org.apache.hadoop.security.token.SecretManager$InvalidToken: appattempt_1424003606313_0001_000002 not found in AMRMTokenSecretManager.
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
        at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
        at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(ApplicationMasterProtocolPBClientImpl.java:109)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy36.registerApplicationMaster(Unknown Source)
        at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:161)
        at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:122)
        at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:238)
        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
        at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.serviceStart(MRAppMaster.java:807)
        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
        at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
        at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1075)
        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
        at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.run(MRAppMaster.java:1478)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
        at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1474)
        at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1407)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424003606313_0001_000002 not found in AMRMTokenSecretManager.
        at org.apache.hadoop.ipc.Client.call(Client.java:1411)
        at org.apache.hadoop.ipc.Client.call(Client.java:1364)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy35.registerApplicationMaster(Unknown Source)
        at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(ApplicationMasterProtocolPBClientImpl.java:106)
        ... 22 more
2015-02-15 07:51:10,765 INFO [main] org.apache.hadoop.service.AbstractService: Service RMCommunicator failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.token.SecretManager$InvalidToken: appattempt_1424003606313_0001_000002 not found in AMRMTokenSecretManager.
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.token.SecretManager$InvalidToken: appattempt_1424003606313_0001_000002 not found in AMRMTokenSecretManager.
        at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:178)
        at org.apac

Any help is greatly appreciated.
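For reference, here is a minimal sketch of the kind of driver used to submit the job. The class name RemoteJobDriver, the job name, the jar name, and the input/output paths are placeholders rather than the actual code; the point is only that the driver goes through ToolRunner, so the -conf hadoop-cluster.xml option is parsed by GenericOptionsParser and merged into the job configuration before submission.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Placeholder driver class, not the actual job code.
public class RemoteJobDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already contains the properties from -conf hadoop-cluster.xml,
        // because ToolRunner/GenericOptionsParser handles that option before run() is called.
        Job job = Job.getInstance(getConf(), "remote-job");
        job.setJarByClass(RemoteJobDriver.class);
        // Mapper/reducer and key/value class setup omitted for brevity.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new RemoteJobDriver(), args));
    }
}

The job is then submitted from the client with something like:

hadoop jar remote-job.jar RemoteJobDriver -conf hadoop-cluster.xml <input path> <output path>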