From: "Roland DePratti" <roland.depratti@cox.net>
To: user@hadoop.apache.org
Subject: RE: Yarn AM is abending job when submitting a remote job to cluster
Date: Sun, 22 Feb 2015 07:42:42 -0500

Ulul,

I appreciate your help and your trying my use case. I think I have a lot of good details for you.
Here is my command:

hadoop jar avgwordlength.jar solution.AvgWordLength -conf ~/conf/hadoop-cluster.xml /user/cloudera/shakespeare wordlengths7

Since my last email, I examined the syslogs (I ran both jobs with debug turned on) for both the remote abend and the local successful run on the cluster server. I have attached both logs, plus a file where I posted my manual comparison findings, and the config xml file.

Briefly, here is what I found (more details in the Comparison Log w/ Notes file):

1. Both logs follow the same steps with the same outcome from the beginning to line 1590.
2. At line 1590 both logs record an AMRMTokenSelector "Looking for Token with service" message.
   - The successful job does this on the cluster server (192.168.2.253), since it was run locally.
   - The abending job does this on the client VM (192.168.2.185).
3. After that point the logs are not the same until JobHistory kicks in.
   - The abending log spends a lot of time trying to handle the error.
   - The successful job begins processing the job:
     o At line 1615 it sets up the queue (root.cloudera).
     o At line 1651 JOB_SETUP_Complete is reported.
     o Neither of these messages appears in the abended log.

My guess is this is a setup problem that I produced – I just can't find it.

- rd

From: Ulul [mailto:hadoop@ulul.org]
Sent: Saturday, February 21, 2015 9:50 PM
To: user@hadoop.apache.org
Subject: Re: Yarn AM is abending job when submitting a remote job to cluster

Hi Roland

I tried to reproduce your problem with a single-node setup submitting a job to a remote cluster (please note I'm an HDP user; it's a sandbox submitting to a 3-VM cluster).
It worked like a charm...
I ran into problems when submitting the job from another user, but with a permission problem; it does not look like your AMRMToken problem.

We are probably submitting our jobs differently though. I use hadoop jar --config <conf dir>; you seem to be using something different, since you have the -conf generic option.

Would you please share your job command ?

Ulul
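An aside for readers following the thread: the two submission styles contrasted above load configuration differently. A minimal sketch of each, assuming the driver class uses ToolRunner so the generic options get parsed; the conf-directory name below is illustrative, not taken from the thread:

    # Style 1: point the whole hadoop client at an alternate configuration directory
    hadoop --config ~/conf-remote jar avgwordlength.jar solution.AvgWordLength \
        /user/cloudera/shakespeare wordlengths7

    # Style 2: merge one resource file over the client's default configuration,
    # via the -conf generic option (handled by GenericOptionsParser)
    hadoop jar avgwordlength.jar solution.AvgWordLength \
        -conf ~/conf/hadoop-cluster.xml /user/cloudera/shakespeare wordlengths7

With -conf, any key not set in hadoop-cluster.xml silently falls back to the client VM's own defaults, which is relevant to the log divergence described above.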
Le 20/02/2015 03:09, Roland DePratti a écrit :

Xuan,

Thanks for asking. Here is the RM log. It almost looks like the log completes successfully (see red highlighting).

2015-02-19 19:55:43,315 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 12
2015-02-19 19:55:44,659 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 12 submitted by user cloudera
2015-02-19 19:55:44,659 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1424003606313_0012
2015-02-19 19:55:44,659 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=cloudera IP=192.168.2.185 OPERATION=Submit Application Request TARGET=ClientRMService RESULT=SUCCESS APPID=application_1424003606313_0012
2015-02-19 19:55:44,659 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1424003606313_0012 State change from NEW to NEW_SAVING
2015-02-19 19:55:44,659 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1424003606313_0012
2015-02-19 19:55:44,660 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1424003606313_0012 State change from NEW_SAVING to SUBMITTED
2015-02-19 19:55:44,666 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Accepted application application_1424003606313_0012 from user: cloudera, in queue: default, currently num of applications: 1
2015-02-19 19:55:44,667 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1424003606313_0012 State change from SUBMITTED to ACCEPTED
2015-02-19 19:55:44,667 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1424003606313_0012_000001
2015-02-19 19:55:44,667 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000001 State change from NEW to SUBMITTED
2015-02-19 19:55:44,667 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Added Application Attempt appattempt_1424003606313_0012_000001 to scheduler from user: cloudera
2015-02-19 19:55:44,669 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000001 State change from SUBMITTED to SCHEDULED
2015-02-19 19:55:50,671 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1424003606313_0012_01_000001 Container Transitioned from NEW to ALLOCATED
2015-02-19 19:55:50,671 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=cloudera OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1424003606313_0012 CONTAINERID=container_1424003606313_0012_01_000001
2015-02-19 19:55:50,671 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1424003606313_0012_01_000001 of capacity <memory:1024, vCores:1> on host hadoop0.rdpratti.com:8041, which has 1 containers, <memory:1024, vCores:1> used and <memory:433, vCores:1> available after allocation
2015-02-19 19:55:50,672 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : hadoop0.rdpratti.com:8041 for container : container_1424003606313_0012_01_000001
2015-02-19 19:55:50,672 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1424003606313_0012_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-02-19 19:55:50,673 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1424003606313_0012_000001
2015-02-19 19:55:50,673 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1424003606313_0012 AttemptId: appattempt_1424003606313_0012_000001 MasterContainer: Container: [ContainerId: container_1424003606313_0012_01_000001, NodeId: hadoop0.rdpratti.com:8041, NodeHttpAddress: hadoop0.rdpratti.com:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.2.253:8041 }, ]
2015-02-19 19:55:50,673 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000001 State change from SCHEDULED to ALLOCATED_SAVING
2015-02-19 19:55:50,673 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000001 State change from ALLOCATED_SAVING to ALLOCATED
2015-02-19 19:55:50,673 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1424003606313_0012_000001
2015-02-19 19:55:50,674 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1424003606313_0012_01_000001, NodeId: hadoop0.rdpratti.com:8041, NodeHttpAddress: hadoop0.rdpratti.com:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.2.253:8041 }, ] for AM appattempt_1424003606313_0012_000001
2015-02-19 19:55:50,675 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1424003606313_0012_01_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir= -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Djava.net.preferIPv4Stack=true -Xmx209715200 org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1>/stdout 2>/stderr
2015-02-19 19:55:50,675 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1424003606313_0012_000001
2015-02-19 19:55:50,675 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1424003606313_0012_000001
2015-02-19 19:55:50,688 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1424003606313_0012_01_000001, NodeId: hadoop0.rdpratti.com:8041, NodeHttpAddress: hadoop0.rdpratti.com:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.2.253:8041 }, ] for AM appattempt_1424003606313_0012_000001
2015-02-19 19:55:50,688 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000001 State change from ALLOCATED to LAUNCHED
2015-02-19 19:55:50,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1424003606313_0012_01_000001 Container Transitioned from ACQUIRED to RUNNING
2015-02-19 19:55:57,941 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1424003606313_0012_01_000001 Container Transitioned from RUNNING to COMPLETED
2015-02-19 19:55:57,941 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Completed container: container_1424003606313_0012_01_000001 in state: COMPLETED event:FINISHED
2015-02-19 19:55:57,942 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=cloudera OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1424003606313_0012 CONTAINERID=container_1424003606313_0012_01_000001
2015-02-19 19:55:57,942 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1424003606313_0012_01_000001 of capacity on host hadoop0.rdpratti.com:8041, which currently has 0 containers, used and available, release resources=true
2015-02-19 19:55:57,942 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Application attempt appattempt_1424003606313_0012_000001 released container container_1424003606313_0012_01_000001 on node: host: hadoop0.rdpratti.com:8041 #containers=0 available=1457 used=0 with event: FINISHED
2015-02-19 19:55:57,942 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1424003606313_0012_000001 with final state: FAILED, and exit status: 1
2015-02-19 19:55:57,942 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000001 State change from LAUNCHED to FINAL_SAVING
2015-02-19 19:55:57,942 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1424003606313_0012_000001
2015-02-19 19:55:57,943 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1424003606313_0012_000001
2015-02-19 19:55:57,943 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000001 State change from FINAL_SAVING to FAILED
2015-02-19 19:55:57,943 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Application appattempt_1424003606313_0012_000001 is done. finalState=FAILED
2015-02-19 19:55:57,943 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1424003606313_0012_000002
2015-02-19 19:55:57,943 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1424003606313_0012 requests cleared
2015-02-19 19:55:57,943 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000002 State change from NEW to SUBMITTED
2015-02-19 19:55:57,943 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Added Application Attempt appattempt_1424003606313_0012_000002 to scheduler from user: cloudera
2015-02-19 19:55:57,943 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000002 State change from SUBMITTED to SCHEDULED
2015-02-19 19:55:58,941 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Null container completed...
2015-02-19 19:56:03,950 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1424003606313_0012_02_000001 Container Transitioned from NEW to ALLOCATED
2015-02-19 19:56:03,950 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=cloudera OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1424003606313_0012 CONTAINERID=container_1424003606313_0012_02_000001
2015-02-19 19:56:03,950 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1424003606313_0012_02_000001 of capacity on host hadoop0.rdpratti.com:8041, which has 1 containers, used and available after allocation
2015-02-19 19:56:03,950 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : hadoop0.rdpratti.com:8041 for container : container_1424003606313_0012_02_000001
2015-02-19 19:56:03,951 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1424003606313_0012_02_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-02-19 19:56:03,951 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1424003606313_0012_000002
2015-02-19 19:56:03,951 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1424003606313_0012 AttemptId: appattempt_1424003606313_0012_000002 MasterContainer: Container: [ContainerId: container_1424003606313_0012_02_000001, NodeId: hadoop0.rdpratti.com:8041, NodeHttpAddress: hadoop0.rdpratti.com:8042, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.2.253:8041 }, ]
2015-02-19 19:56:03,952 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000002 State change from SCHEDULED to ALLOCATED_SAVING
2015-02-19 19:56:03,952 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000002 State change from ALLOCATED_SAVING to ALLOCATED
2015-02-19 19:56:03,952 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1424003606313_0012_000002
2015-02-19 19:56:03,953 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1424003606313_0012_02_000001, NodeId: hadoop0.rdpratti.com:8041, NodeHttpAddress: hadoop0.rdpratti.com:8042, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.2.253:8041 }, ] for AM appattempt_1424003606313_0012_000002
2015-02-19 19:56:03,953 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1424003606313_0012_02_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir= -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Djava.net.preferIPv4Stack=true -Xmx209715200 org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1>/stdout 2>/stderr
2015-02-19 19:56:03,953 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1424003606313_0012_000002
2015-02-19 19:56:03,953 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1424003606313_0012_000002
2015-02-19 19:56:03,974 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1424003606313_0012_02_000001, NodeId: hadoop0.rdpratti.com:8041, NodeHttpAddress: hadoop0.rdpratti.com:8042, Resource: , Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.2.253:8041 }, ] for AM appattempt_1424003606313_0012_000002
2015-02-19 19:56:03,974 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000002 State change from ALLOCATED to LAUNCHED
2015-02-19 19:56:04,947 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1424003606313_0012_02_000001 Container Transitioned from ACQUIRED to RUNNING
2015-02-19 19:56:10,956 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1424003606313_0012_02_000001 Container Transitioned from RUNNING to COMPLETED
2015-02-19 19:56:10,956 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Completed container: container_1424003606313_0012_02_000001 in state: COMPLETED event:FINISHED
2015-02-19 19:56:10,956 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=cloudera OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1424003606313_0012 CONTAINERID=container_1424003606313_0012_02_000001
2015-02-19 19:56:10,956 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1424003606313_0012_02_000001 of capacity on host hadoop0.rdpratti.com:8041, which currently has 0 containers, used and available, release resources=true
2015-02-19 19:56:10,956 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1424003606313_0012_000002 with final state: FAILED, and exit status: 1
2015-02-19 19:56:10,956 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Application attempt appattempt_1424003606313_0012_000002 released container container_1424003606313_0012_02_000001 on node: host: hadoop0.rdpratti.com:8041 #containers=0 available=1457 used=0 with event: FINISHED
2015-02-19 19:56:10,956 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000002 State change from LAUNCHED to FINAL_SAVING
2015-02-19 19:56:10,956 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1424003606313_0012_000002
2015-02-19 19:56:10,957 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1424003606313_0012_000002
2015-02-19 19:56:10,957 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000002 State change from FINAL_SAVING to FAILED
2015-02-19 19:56:10,957 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1424003606313_0012 with final state: FAILED
2015-02-19 19:56:10,957 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1424003606313_0012 State change from ACCEPTED to FINAL_SAVING
2015-02-19 19:56:10,957 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1424003606313_0012
2015-02-19 19:56:10,957 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Application appattempt_1424003606313_0012_000002 is done. finalState=FAILED
2015-02-19 19:56:10,957 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1424003606313_0012 requests cleared
2015-02-19 19:56:10,990 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Application application_1424003606313_0012 failed 2 times due to AM Container for appattempt_1424003606313_0012_000002 exited with exitCode: 1 due to: Exception from container-launch.
Container id: container_1424003606313_0012_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:197)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
.Failing this attempt.. Failing the application.
2015-02-19 19:56:10,990 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1424003606313_0012 State change from FINAL_SAVING to FAILED
2015-02-19 19:56:10,991 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=cloudera OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1424003606313_0012 failed 2 times due to AM Container for appattempt_1424003606313_0012_000002 exited with exitCode: 1 due to: Exception from container-launch.
Container id: container_1424003606313_0012_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:197)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
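An aside: the RM log above records only that each AM container exited with code 1; the underlying exception lives in the AM container's own stdout/stderr/syslog. Assuming log aggregation is enabled on the cluster, those logs could be pulled with the application id shown above:

    # Fetch the aggregated container logs (AM stdout/stderr/syslog) for the failed app
    yarn logs -applicationId application_1424003606313_0012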
From: Xuan Gong [mailto:xgong@hortonworks.com]
Sent: Thursday, February 19, 2015 8:23 PM
To: user@hadoop.apache.org
Subject: Re: Yarn AM is abending job when submitting a remote job to cluster

Hey, Roland:

Could you also check the RM logs for this application, please ? Maybe we could find something there.

Thanks

Xuan Gong

From: Roland DePratti
Reply-To: "user@hadoop.apache.org"
Date: Thursday, February 19, 2015 at 5:11 PM
To: "user@hadoop.apache.org"
Subject: RE: Yarn AM is abending job when submitting a remote job to cluster

No, I hear you.

I was just stating that, since hdfs works, there is something right about the connectivity – i.e. the server is reachable and hadoop was able to process the request – but, like you said, that doesn't mean yarn works.

I tried both your solution and Alex's solution, unfortunately without any improvement.

Here is the command I am executing:

hadoop jar avgWordlength.jar solution.AvgWordLength -conf ~/conf/hadoop-cluster.xml /user/cloudera/shakespeare wordlength4

Here is the new hadoop-cluster.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop0.rdpratti.com:8020</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>hadoop0.rdpratti.com:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop0.rdpratti.com:8032</value>
  </property>
</configuration>

I also deleted the .staging directory under the submitting user, plus restarted the Job History Server.

Resubmitted the job with the same result. Here is the log:

2015-02-19 19:56:05,061 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1424003606313_0012_000002
2015-02-19 19:56:05,468 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; Ignoring.
2015-02-19 19:56:05,471 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2015-02-19 19:56:05,471 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf; Ignoring.
2015-02-19 19:56:05,473 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class; Ignoring.
2015-02-19 19:56:05,476 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf; Ignoring.
2015-02-19 19:56:05,490 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2015-02-19 19:56:05,621 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2015-02-19 19:56:05,621 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@3909f88f)
2015-02-19 19:56:05,684 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
2015-02-19 19:56:05,923 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; Ignoring.
2015-02-19 19:56:05,925 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2015-02-19 19:56:05,929 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf; Ignoring.
2015-02-19 19:56:05,930 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class; Ignoring.
2015-02-19 19:56:05,934 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf; Ignoring.
2015-02-19 19:56:05,958 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2015-02-19 19:56:06,529 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-02-19 19:56:06,719 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null
2015-02-19 19:56:06,837 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2015-02-19 19:56:06,881 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2015-02-19 19:56:06,882 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2015-02-19 19:56:06,882 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2015-02-19 19:56:06,883 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2015-02-19 19:56:06,884 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2015-02-19 19:56:06,885 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2015-02-19 19:56:06,885 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2015-02-19 19:56:06,886 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2015-02-19 19:56:06,899 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Recovery is enabled. Will try to recover from previous life on best effort basis.
2015-02-19 19:56:06,918 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Previous history file is at hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424003606313_0012/job_1424003606313_0012_1.jhist
2015-02-19 19:56:07,377 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Read completed tasks from history 0
2015-02-19 19:56:07,423 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2015-02-19 19:56:07,453 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-02-19 19:56:07,507 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-02-19 19:56:07,507 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics system started
2015-02-19 19:56:07,515 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for job_1424003606313_0012 to jobTokenSecretManager
2015-02-19 19:56:07,536 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing job_1424003606313_0012 because: not enabled; too much RAM;
2015-02-19 19:56:07,555 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job job_1424003606313_0012 = 5343207. Number of splits = 5
2015-02-19 19:56:07,557 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for job job_1424003606313_0012 = 1
2015-02-19 19:56:07,557 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1424003606313_0012Job Transitioned from NEW to INITED
2015-02-19 19:56:07,558 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized, multi-container job job_1424003606313_0012.
2015-02-19 19:56:07,618 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-02-19 19:56:07,630 INFO [Socket Reader #1 for port 46841] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 46841
2015-02-19 19:56:07,648 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
2015-02-19 19:56:07,648 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-02-19 19:56:07,649 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated MRClientService at hadoop0.rdpratti.com/192.168.2.253:46841
2015-02-19 19:56:07,650 INFO [IPC Server listener on 46841] org.apache.hadoop.ipc.Server: IPC Server listener on 46841: starting
2015-02-19 19:56:07,721 INFO [main] org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-02-19 19:56:07,727 INFO [main] org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.mapreduce is not defined
2015-02-19 19:56:07,739 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-02-19 19:56:07,745 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce
2015-02-19 19:56:07,745 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static
2015-02-19 19:56:07,749 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /mapreduce/*
2015-02-19 19:56:07,749 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2015-02-19 19:56:07,760 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 39939
2015-02-19 19:56:07,760 INFO [main] org.mortbay.log: jetty-6.1.26.cloudera.4
2015-02-19 19:56:07,789 INFO [main] org.mortbay.log: Extract jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/mapreduce to /tmp/Jetty_0_0_0_0_39939_mapreduce____.o5qk0w/webapp
2015-02-19 19:56:08,156 INFO [main] org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:39939
2015-02-19 19:56:08,157 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app /mapreduce started at 39939
2015-02-19 19:56:08,629 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2015-02-19 19:56:08,634 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-02-19 19:56:08,635 INFO [Socket Reader #1 for port 43858] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 43858
2015-02-19 19:56:08,639 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-02-19 19:56:08,642 INFO [IPC Server listener on 43858] org.apache.hadoop.ipc.Server: IPC Server listener on 43858: starting
2015-02-19 19:56:08,663 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
2015-02-19 19:56:08,663 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2015-02-19 19:56:08,663 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2015-02-19 19:56:08,797 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; Ignoring.
2015-02-19 19:56:08,798 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2015-02-19 19:56:08,798 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf; Ignoring.
2015-02-19 19:56:08,798 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class; Ignoring.
2015-02-19 19:56:08,799 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf; Ignoring.
2015-02-19 19:56:08,809 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2015-02-19 19:56:08,821 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at quickstart.cloudera/192.168.2.185:8030
2015-02-19 19:56:08,975 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424003606313_0012_000002 not found in AMRMTokenSecretManager.
2015-02-19 19:56:08,976 WARN [main] org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424003606313_0012_000002 not found in AMRMTokenSecretManager.
2015-02-19 19:56:08,976 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424003606313_0012_000002 not found in AMRMTokenSecretManager.
2015-02-19 19:56:08,981 ERROR [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Exception while registering
org.apache.hadoop.security.token.SecretManager$InvalidToken: appattempt_1424003606313_0012_000002 not found in AMRMTokenSecretManager.
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
    at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
    at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(ApplicationMasterProtocolPBClientImpl.java:109)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy36.registerApplicationMaster(Unknown Source)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:161)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:122)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:238)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.serviceStart(MRAppMaster.java:807)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1075)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.run(MRAppMaster.java:1478)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1474)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1407)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424003606313_0012_000002 not found in AMRMTokenSecretManager.
    at org.apache.hadoop.ipc.Client.call(Client.java:1411)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy35.registerApplicationMaster(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(ApplicationMasterProtocolPBClientImpl.java:106)
    ... 22 more
2015-02-19 19:56:08,983 INFO [main] org.apache.hadoop.service.AbstractService: Service RMCommunicator failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.token.SecretManager$InvalidToken: appattempt_1424003606313_0012_000002 not found in AMRMTokenSecretManager.
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.token.SecretManager$InvalidToken: appattempt_1424003606313_0012_000002 not found in AMRMTokenSecretManager.

From: Ulul [mailto:hadoop@ulul.org]
Sent: Thursday, February 19, 2015 5:08 PM
To: user@hadoop.apache.org
Subject: Re: Yarn AM is abending job when submitting a remote job to cluster

Is your point that using the hdfs:// prefix is valid since our hdfs client works ?
fs.defaultFS defines the namenode address and the filesystem type. It doesn't imply that the prefix should be used for yarn and mapreduce options that are not directly linked to hdfs.

Le 19/02/2015 22:56, Ulul a écrit :

In that case it's just between your hdfs client, the NN and the DNs; no YARN or MR component is involved. The fact that this works is not related to your MR job not succeeding.

Le 19/02/2015 22:45, roland.depratti a écrit :

Thanks for looking at my problem.

I can run an hdfs command from the client, with the config file listed, that does a cat on a file in hdfs on the remote cluster and returns the contents of that file to the client.

- rd

Sent from my Verizon Wireless 4G LTE smartphone

-------- Original message --------
From: Ulul
Date: 02/19/2015 4:03 PM (GMT-05:00)
To: user@hadoop.apache.org
Subject: Re: Yarn AM is abending job when submitting a remote job to cluster

Hi

Doesn't seem like an ssl error to me (the log states that attempts to override final properties are ignored).
On the other hand, the configuration seems wrong: mapreduce.jobtracker.address and yarn.resourcemanager.address should only contain an IP or a hostname. You should remove 'hdfs://', though the log doesn't suggest it has anything to do with your problem....

And what do you mean by an "HDFS job" ?

Ulul
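An aside on the excerpt above: the line "Connecting to ResourceManager at quickstart.cloudera/192.168.2.185:8030" shows the AM resolving the scheduler address (port 8030) to the client VM rather than to the cluster RM that issued the AMRMToken, which would be consistent with the token being "not found". A hedged sketch of the client-side property that would point the AM at the cluster's scheduler instead (8030 is assumed to be the default scheduler port; the thread does not confirm this was the eventual fix):

    <property>
      <name>yarn.resourcemanager.scheduler.address</name>
      <value>hadoop0.rdpratti.com:8030</value>
    </property>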
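To make the distinction Ulul draws concrete: fs.defaultFS takes a filesystem URI, so the scheme stays, while the YARN/MR endpoints are bare host:port RPC addresses. A minimal sketch, using the hostname from the original post quoted below:

    <!-- filesystem URI: the hdfs:// scheme belongs here -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://mycluster:8020</value>
    </property>

    <!-- RPC endpoint: host:port only, no hdfs:// prefix -->
    <property>
      <name>yarn.resourcemanager.address</name>
      <value>mycluster:8032</value>
    </property>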
Le 19/02/2015 04:22, daemeon reiydelle a écrit :

> I would guess you do not have your ssl certs set up, client or server, based on the error.
>
> "Life should not be a journey to the grave with the intention of arriving safely in a pretty and well preserved body, but rather to skid in broadside in a cloud of smoke, thoroughly used up, totally worn out, and loudly proclaiming 'Wow! What a Ride!'"
> - Hunter Thompson
>
> Daemeon C.M. Reiydelle
> USA (+1) 415.501.0198
> London (+44) (0) 20 8144 9872
>
> On Wed, Feb 18, 2015 at 5:19 PM, Roland DePratti wrote:
>
> I have been searching for a handle on a problem with very few clues. Any help pointing me in the right direction will be huge.
>
> I have not received any input from the Cloudera google groups. Perhaps this is more Yarn based and I am hoping I have more luck here. Any help is greatly appreciated.
>
> I am running a Hadoop cluster using CDH5.3. I also have a client machine with a standalone one node setup (VM).
>
> All environments are running CentOS 6.6.
>
> I have submitted some Java mapreduce jobs locally on both the cluster and the standalone environment with successful completions.
>
> I can submit a remote HDFS job from the client to the cluster using -conf hadoop-cluster.xml (see below) and get data back from the cluster with no problem.
>
> When the mapreduce jobs are submitted remotely, I get an AM error. The AM fails the job with:
>
> SecretManager$InvalidToken: appattempt_1424003606313_0001_000002 not found in AMRMTokenSecretManager
>
> I searched /var/log/secure on the client and cluster with no unusual messages.
>
> Here are the contents of hadoop-cluster.xml:
>
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://mycluser:8020</value>
>   </property>
>   <property>
>     <name>mapreduce.jobtracker.address</name>
>     <value>hdfs://mycluster:8032</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.address</name>
>     <value>hdfs://mycluster:8032</value>
>   </property>
> </configuration>
>
> Here is the output from the job log on the cluster:
>
> 2015-02-15 07:51:06,544 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1424003606313_0001_000002
> 2015-02-15 07:51:06,949 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; Ignoring.
> 2015-02-15 07:51:06,952 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
> 2015-02-15 07:51:06,952 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf; Ignoring.
> 2015-02-15 07:51:06,954 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class; Ignoring.
> 2015-02-15 07:51:06,957 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf; Ignoring.
> 2015-02-15 07:51:06,973 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
> 2015-02-15 07:51:07,241 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2015-02-15 07:51:07,241 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@33be1aa0)
> 2015-02-15 07:51:07,332 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
> 2015-02-15 07:51:07,627 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; Ignoring.
> 2015-02-15 07:51:07,632 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
> 2015-02-15 07:51:07,632 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf; Ignoring.
> 2015-02-15 07:51:07,639 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class; Ignoring.
> 2015-02-15 07:51:07,645 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf; Ignoring.
> 2015-02-15 07:51:07,663 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
> 2015-02-15 07:51:08,237 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 2015-02-15 07:51:08,429 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null
> 2015-02-15 07:51:08,499 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
> 2015-02-15 07:51:08,526 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> 2015-02-15 07:51:08,527 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
> 2015-02-15 07:51:08,561 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
> 2015-02-15 07:51:08,562 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
> 2015-02-15 07:51:08,566 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
> 2015-02-15 07:51:08,568 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
> 2015-02-15 07:51:08,568 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
> 2015-02-15 07:51:08,570 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
> 2015-02-15 07:51:08,599 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Recovery is enabled. Will try to recover from previous life on best effort basis.
> 2015-02-15 07:51:08,642 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Previous history file is at hdfs://mycluster.com:8020/user/cloudera/.staging/job_1424003606313_0001/job_1424003606313_0001_1.jhist
> 2015-02-15 07:51:09,147 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Read completed tasks from history 0
> 2015-02-15 07:51:09,193 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
> 2015-02-15 07:51:09,222 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> 2015-02-15 07:51:09,277 INFO [main]
Ulul,

 

I appreciate your help and trying my use case.=A0 I think I have a = lot of good details for you.

 

Here is my commands:

 

hadoop jar = avgwordlength.jar solution.AvgWordLength -conf ~/conf/hadoop-cluster.xml = /user/cloudera/shakespeare wordlengths7.

Since my = last email, I examined the syslogs ( I ran both jobs with debug turned = on) for both the remote abend and the local successful run on the = cluster server.

I have = attached both logs, plus a file where I posted my manual comparison = findings, and the config xml file

Briefly, = here is what I found (more details in Comparison Log w/ Notes = file):

1.     Both logs = follow the same steps with same outcome from the beginning to line = 1590.

2.     At line = 1590 both logs record a AMRMTokenSelector Looking for Token with = service

-       = The = successful job does this on the cluster server (192.168.2.253) since it = was run locally.

-       = The = abending job does this on the client vm = (192.168.2.185)

3.     After that = point the logs are not the same until JobHistory kicks = in

-       = The = abending log spends a lot of time trying to handle the = error

-       = The = successful job begins processing the job.

o    At line = 1615 it setup the queue (root.cloudera)

o    At line = 1651 JOB_SETUP_Complete is reported.

o    Both of = these messages do not appear in the abended log.

 <= /o:p>

My guess is = this a setup problem that I produced – I just can’t find = it.

 <= /o:p>

-       = =A0rd

 

 

From: Ulul [mailto:hadoop@ulul.org]
Sent: Saturday, February = 21, 2015 9:50 PM
To: user@hadoop.apache.org
Subject: = Re: Yarn AM is abending job when submitting a remote job to = cluster

 

Hi Roland

I = tried to reproduce your problem with a single node setup submitting a = job to a remote cluster (please note I'm an HDP user, it's a sandbox = submitting to a 3 VMs cluster)
It worked like a charm...
I run = into problems when submitting the job from another user but with a = permission problem, it does not look like your AMRMToken = problem.

We are probably submitting our jobs differently though. = I use hadoop jar --config <conf dir>, you seem to be using = something different since you have the -conf generic option

Would = you please share your job command ?

Ulul =

Le 20/02/2015 03:09, = Roland DePratti a =E9crit :

Xuan,

 

Thanks for asking. Here is the RM log. It almost looks like the log = completes successfully (see red highlighting).

 

 

 

2015-02-19 19:55:43,315 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 12
2015-02-19 19:55:44,659 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 12 submitted by user cloudera
2015-02-19 19:55:44,659 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1424003606313_0012
2015-02-19 19:55:44,659 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=cloudera  IP=192.168.2.185  OPERATION=Submit Application Request  TARGET=ClientRMService  RESULT=SUCCESS  APPID=application_1424003606313_0012
2015-02-19 19:55:44,659 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1424003606313_0012 State change from NEW to NEW_SAVING
2015-02-19 19:55:44,659 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1424003606313_0012
2015-02-19 19:55:44,660 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1424003606313_0012 State change from NEW_SAVING to SUBMITTED
2015-02-19 19:55:44,666 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Accepted application application_1424003606313_0012 from user: cloudera, in queue: default, currently num of applications: 1
2015-02-19 19:55:44,667 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1424003606313_0012 State change from SUBMITTED to ACCEPTED
2015-02-19 19:55:44,667 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1424003606313_0012_000001
2015-02-19 19:55:44,667 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000001 State change from NEW to SUBMITTED
2015-02-19 19:55:44,667 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Added Application Attempt appattempt_1424003606313_0012_000001 to scheduler from user: cloudera
2015-02-19 19:55:44,669 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000001 State change from SUBMITTED to SCHEDULED
2015-02-19 19:55:50,671 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1424003606313_0012_01_000001 Container Transitioned from NEW to ALLOCATED
2015-02-19 19:55:50,671 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=cloudera  OPERATION=AM Allocated Container  TARGET=SchedulerApp  RESULT=SUCCESS  APPID=application_1424003606313_0012  CONTAINERID=container_1424003606313_0012_01_000001
2015-02-19 19:55:50,671 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1424003606313_0012_01_000001 of capacity <memory:1024, vCores:1> on host hadoop0.rdpratti.com:8041, which has 1 containers, <memory:1024, vCores:1> used and <memory:433, vCores:1> available after allocation
2015-02-19 19:55:50,672 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : hadoop0.rdpratti.com:8041 for container : container_1424003606313_0012_01_000001
2015-02-19 19:55:50,672 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1424003606313_0012_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-02-19 19:55:50,673 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1424003606313_0012_000001
2015-02-19 19:55:50,673 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1424003606313_0012 AttemptId: appattempt_1424003606313_0012_000001 MasterContainer: Container: [ContainerId: container_1424003606313_0012_01_000001, NodeId: hadoop0.rdpratti.com:8041, NodeHttpAddress: hadoop0.rdpratti.com:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.2.253:8041 }, ]
2015-02-19 19:55:50,673 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000001 State change from SCHEDULED to ALLOCATED_SAVING
2015-02-19 19:55:50,673 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000001 State change from ALLOCATED_SAVING to ALLOCATED
2015-02-19 19:55:50,673 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1424003606313_0012_000001
2015-02-19 19:55:50,674 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1424003606313_0012_01_000001, NodeId: hadoop0.rdpratti.com:8041, NodeHttpAddress: hadoop0.rdpratti.com:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.2.253:8041 }, ] for AM appattempt_1424003606313_0012_000001
2015-02-19 19:55:50,675 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1424003606313_0012_01_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA  -Djava.net.preferIPv4Stack=true -Xmx209715200 org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr
2015-02-19 19:55:50,675 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1424003606313_0012_000001
2015-02-19 19:55:50,675 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1424003606313_0012_000001
2015-02-19 19:55:50,688 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1424003606313_0012_01_000001, NodeId: hadoop0.rdpratti.com:8041, NodeHttpAddress: hadoop0.rdpratti.com:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.2.253:8041 }, ] for AM appattempt_1424003606313_0012_000001
2015-02-19 19:55:50,688 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000001 State change from ALLOCATED to LAUNCHED
2015-02-19 19:55:50,928 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1424003606313_0012_01_000001 Container Transitioned from ACQUIRED to RUNNING
2015-02-19 19:55:57,941 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1424003606313_0012_01_000001 Container Transitioned from RUNNING to COMPLETED
2015-02-19 19:55:57,941 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Completed container: container_1424003606313_0012_01_000001 in state: COMPLETED event:FINISHED
2015-02-19 19:55:57,942 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=cloudera  OPERATION=AM Released Container  TARGET=SchedulerApp  RESULT=SUCCESS  APPID=application_1424003606313_0012  CONTAINERID=container_1424003606313_0012_01_000001
2015-02-19 19:55:57,942 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1424003606313_0012_01_000001 of capacity <memory:1024, vCores:1> on host hadoop0.rdpratti.com:8041, which currently has 0 containers, <memory:0, vCores:0> used and <memory:1457, vCores:2> available, release resources=true
2015-02-19 19:55:57,942 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Application attempt appattempt_1424003606313_0012_000001 released container container_1424003606313_0012_01_000001 on node: host: hadoop0.rdpratti.com:8041 #containers=0 available=1457 used=0 with event: FINISHED
2015-02-19 19:55:57,942 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1424003606313_0012_000001 with final state: FAILED, and exit status: 1
2015-02-19 19:55:57,942 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000001 State change from LAUNCHED to FINAL_SAVING
2015-02-19 19:55:57,942 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1424003606313_0012_000001
2015-02-19 19:55:57,943 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1424003606313_0012_000001
2015-02-19 19:55:57,943 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000001 State change from FINAL_SAVING to FAILED
2015-02-19 19:55:57,943 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Application appattempt_1424003606313_0012_000001 is done. finalState=FAILED
2015-02-19 19:55:57,943 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1424003606313_0012_000002
2015-02-19 19:55:57,943 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1424003606313_0012 requests cleared
2015-02-19 19:55:57,943 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000002 State change from NEW to SUBMITTED
2015-02-19 19:55:57,943 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Added Application Attempt appattempt_1424003606313_0012_000002 to scheduler from user: cloudera
2015-02-19 19:55:57,943 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000002 State change from SUBMITTED to SCHEDULED
2015-02-19 19:55:58,941 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Null container completed...
2015-02-19 19:56:03,950 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1424003606313_0012_02_000001 Container Transitioned from NEW to ALLOCATED
2015-02-19 19:56:03,950 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=cloudera  OPERATION=AM Allocated Container  TARGET=SchedulerApp  RESULT=SUCCESS  APPID=application_1424003606313_0012  CONTAINERID=container_1424003606313_0012_02_000001
2015-02-19 19:56:03,950 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1424003606313_0012_02_000001 of capacity <memory:1024, vCores:1> on host hadoop0.rdpratti.com:8041, which has 1 containers, <memory:1024, vCores:1> used and <memory:433, vCores:1> available after allocation
2015-02-19 19:56:03,950 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : hadoop0.rdpratti.com:8041 for container : container_1424003606313_0012_02_000001
2015-02-19 19:56:03,951 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1424003606313_0012_02_000001 Container Transitioned from ALLOCATED to ACQUIRED
2015-02-19 19:56:03,951 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1424003606313_0012_000002
2015-02-19 19:56:03,951 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1424003606313_0012 AttemptId: appattempt_1424003606313_0012_000002 MasterContainer: Container: [ContainerId: container_1424003606313_0012_02_000001, NodeId: hadoop0.rdpratti.com:8041, NodeHttpAddress: hadoop0.rdpratti.com:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.2.253:8041 }, ]
2015-02-19 19:56:03,952 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000002 State change from SCHEDULED to ALLOCATED_SAVING
2015-02-19 19:56:03,952 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000002 State change from ALLOCATED_SAVING to ALLOCATED
2015-02-19 19:56:03,952 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1424003606313_0012_000002
2015-02-19 19:56:03,953 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1424003606313_0012_02_000001, NodeId: hadoop0.rdpratti.com:8041, NodeHttpAddress: hadoop0.rdpratti.com:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.2.253:8041 }, ] for AM appattempt_1424003606313_0012_000002
2015-02-19 19:56:03,953 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1424003606313_0012_02_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA  -Djava.net.preferIPv4Stack=true -Xmx209715200 org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr
2015-02-19 19:56:03,953 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1424003606313_0012_000002
2015-02-19 19:56:03,953 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1424003606313_0012_000002
2015-02-19 19:56:03,974 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1424003606313_0012_02_000001, NodeId: hadoop0.rdpratti.com:8041, NodeHttpAddress: hadoop0.rdpratti.com:8042, Resource: <memory:1024, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.2.253:8041 }, ] for AM appattempt_1424003606313_0012_000002
2015-02-19 19:56:03,974 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000002 State change from ALLOCATED to LAUNCHED
2015-02-19 19:56:04,947 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1424003606313_0012_02_000001 Container Transitioned from ACQUIRED to RUNNING
2015-02-19 19:56:10,956 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1424003606313_0012_02_000001 Container Transitioned from RUNNING to COMPLETED
2015-02-19 19:56:10,956 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Completed container: container_1424003606313_0012_02_000001 in state: COMPLETED event:FINISHED
2015-02-19 19:56:10,956 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=cloudera  OPERATION=AM Released Container  TARGET=SchedulerApp  RESULT=SUCCESS  APPID=application_1424003606313_0012  CONTAINERID=container_1424003606313_0012_02_000001
2015-02-19 19:56:10,956 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1424003606313_0012_02_000001 of capacity <memory:1024, vCores:1> on host hadoop0.rdpratti.com:8041, which currently has 0 containers, <memory:0, vCores:0> used and <memory:1457, vCores:2> available, release resources=true
2015-02-19 19:56:10,956 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1424003606313_0012_000002 with final state: FAILED, and exit status: 1
2015-02-19 19:56:10,956 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Application attempt appattempt_1424003606313_0012_000002 released container container_1424003606313_0012_02_000001 on node: host: hadoop0.rdpratti.com:8041 #containers=0 available=1457 used=0 with event: FINISHED
2015-02-19 19:56:10,956 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000002 State change from LAUNCHED to FINAL_SAVING
2015-02-19 19:56:10,956 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1424003606313_0012_000002
2015-02-19 19:56:10,957 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1424003606313_0012_000002
2015-02-19 19:56:10,957 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1424003606313_0012_000002 State change from FINAL_SAVING to FAILED
2015-02-19 19:56:10,957 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1424003606313_0012 with final state: FAILED
2015-02-19 19:56:10,957 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1424003606313_0012 State change from ACCEPTED to FINAL_SAVING
2015-02-19 19:56:10,957 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1424003606313_0012
2015-02-19 19:56:10,957 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Application appattempt_1424003606313_0012_000002 is done. finalState=FAILED
2015-02-19 19:56:10,957 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1424003606313_0012 requests cleared
2015-02-19 19:56:10,990 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Application application_1424003606313_0012 failed 2 times due to AM Container for appattempt_1424003606313_0012_000002 exited with  exitCode: 1 due to: Exception from container-launch.
Container id: container_1424003606313_0012_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:197)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Container exited with a non-zero exit code 1
.Failing this attempt.. Failing the application.
2015-02-19 19:56:10,990 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1424003606313_0012 State change from FINAL_SAVING to FAILED
2015-02-19 19:56:10,991 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=cloudera  OPERATION=Application Finished - Failed  TARGET=RMAppManager  RESULT=FAILURE  DESCRIPTION=App failed with state: FAILED  PERMISSIONS=Application application_1424003606313_0012 failed 2 times due to AM Container for appattempt_1424003606313_0012_000002 exited with  exitCode: 1 due to: Exception from container-launch.
Container id: container_1424003606313_0012_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:197)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


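The RM log above only records that the AM container exited with code 1; the container's own stdout/stderr and syslog can be pulled after the fact with the YARN CLI, assuming log aggregation is enabled on the cluster:

    yarn logs -applicationId application_1424003606313_0012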
 

 

From: Xuan Gong [mailto:xgong@hortonworks.com]
Sent: Thursday, February 19, 2015 8:23 PM
To: user@hadoop.apache.org
Subject: Re: Yarn AM is abending job when submitting a remote job to cluster

 

Hey, Roland:

Could you also check the RM logs for this application, please? Maybe we could find something there.

 

Thanks

 

Xuan Gong

 

From: Roland DePratti <roland.depratti@cox.net>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Thursday, February 19, 2015 at 5:11 PM
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Subject: RE: Yarn AM is abending job when submitting a remote job to cluster

 

No, I hear you.

 

I was just stating that, since hdfs works, something is right about the connectivity (i.e. the server is reachable and hadoop was able to process the request), but like you said, that doesn't mean yarn works.

 

I tried both your solution and Alex's solution, unfortunately without any improvement.

 

Here is the command I am executing:

 

hadoop jar avgWordlength.jar solution.AvgWordLength -conf ~/conf/hadoop-cluster.xml /user/cloudera/shakespeare wordlength4

 

Here is the new hadoop-cluster.xml:

 

<?xml version="1.0" encoding="UTF-8"?>
<!--generated by Roland-->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop0.rdpratti.com:8020</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>hadoop0.rdpratti.com:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop0.rdpratti.com:8032</value>
  </property>
</configuration>

I also deleted the .staging directory under the submitting user, plus restarted the Job History Server.

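Those two cleanup steps, spelled out as shell (the history server service name is the CDH5 package default and is an assumption here):

    hdfs dfs -rm -r /user/cloudera/.staging
    sudo service hadoop-mapreduce-historyserver restart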
 

Resubmitted the job with the same result. Here is the log:

 

2015-02-19 19:56:05,061 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1424003606313_0012_000002
2015-02-19 19:56:05,468 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert;  Ignoring.
2015-02-19 19:56:05,471 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2015-02-19 19:56:05,471 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf;  Ignoring.
2015-02-19 19:56:05,473 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class;  Ignoring.
2015-02-19 19:56:05,476 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf;  Ignoring.
2015-02-19 19:56:05,490 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2015-02-19 19:56:05,621 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2015-02-19 19:56:05,621 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@3909f88f)
2015-02-19 19:56:05,684 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
2015-02-19 19:56:05,923 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert;  Ignoring.
2015-02-19 19:56:05,925 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2015-02-19 19:56:05,929 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf;  Ignoring.
2015-02-19 19:56:05,930 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class;  Ignoring.
2015-02-19 19:56:05,934 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf;  Ignoring.
2015-02-19 19:56:05,958 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2015-02-19 19:56:06,529 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-02-19 19:56:06,719 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null
2015-02-19 19:56:06,837 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2015-02-19 19:56:06,881 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2015-02-19 19:56:06,882 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2015-02-19 19:56:06,882 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2015-02-19 19:56:06,883 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2015-02-19 19:56:06,884 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2015-02-19 19:56:06,885 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2015-02-19 19:56:06,885 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2015-02-19 19:56:06,886 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2015-02-19 19:56:06,899 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Recovery is enabled. Will try to recover from previous life on best effort basis.
2015-02-19 19:56:06,918 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Previous history file is at hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424003606313_0012/job_1424003606313_0012_1.jhist
2015-02-19 19:56:07,377 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Read completed tasks from history 0
2015-02-19 19:56:07,423 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2015-02-19 19:56:07,453 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-02-19 19:56:07,507 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-02-19 19:56:07,507 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics system started
2015-02-19 19:56:07,515 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for job_1424003606313_0012 to jobTokenSecretManager
2015-02-19 19:56:07,536 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing job_1424003606313_0012 because: not enabled; too much RAM;
2015-02-19 19:56:07,555 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job job_1424003606313_0012 = 5343207. Number of splits = 5
2015-02-19 19:56:07,557 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for job job_1424003606313_0012 = 1
2015-02-19 19:56:07,557 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1424003606313_0012Job Transitioned from NEW to INITED
2015-02-19 19:56:07,558 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized, multi-container job job_1424003606313_0012.
2015-02-19 19:56:07,618 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-02-19 19:56:07,630 INFO [Socket Reader #1 for port 46841] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 46841
2015-02-19 19:56:07,648 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
2015-02-19 19:56:07,648 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-02-19 19:56:07,649 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated MRClientService at hadoop0.rdpratti.com/192.168.2.253:46841
2015-02-19 19:56:07,650 INFO [IPC Server listener on 46841] org.apache.hadoop.ipc.Server: IPC Server listener on 46841: starting
2015-02-19 19:56:07,721 INFO [main] org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-02-19 19:56:07,727 INFO [main] org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.mapreduce is not defined
2015-02-19 19:56:07,739 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-02-19 19:56:07,745 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce
2015-02-19 19:56:07,745 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static
2015-02-19 19:56:07,749 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /mapreduce/*
2015-02-19 19:56:07,749 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2015-02-19 19:56:07,760 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 39939
2015-02-19 19:56:07,760 INFO [main] org.mortbay.log: jetty-6.1.26.cloudera.4
2015-02-19 19:56:07,789 INFO [main] org.mortbay.log: Extract jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/mapreduce to /tmp/Jetty_0_0_0_0_39939_mapreduce____.o5qk0w/webapp
2015-02-19 19:56:08,156 INFO [main] org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:39939
2015-02-19 19:56:08,157 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app /mapreduce started at 39939
2015-02-19 19:56:08,629 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2015-02-19 19:56:08,634 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-02-19 19:56:08,635 INFO [Socket Reader #1 for port 43858] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 43858
2015-02-19 19:56:08,639 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-02-19 19:56:08,642 INFO [IPC Server listener on 43858] org.apache.hadoop.ipc.Server: IPC Server listener on 43858: starting
2015-02-19 19:56:08,663 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
2015-02-19 19:56:08,663 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2015-02-19 19:56:08,663 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2015-02-19 19:56:08,797 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert;  Ignoring.
2015-02-19 19:56:08,798 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2015-02-19 19:56:08,798 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf;  Ignoring.
2015-02-19 19:56:08,798 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class;  Ignoring.
2015-02-19 19:56:08,799 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf;  Ignoring.
2015-02-19 19:56:08,809 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2015-02-19 19:56:08,821 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at quickstart.cloudera/192.168.2.185:8030
2015-02-19 19:56:08,975 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424003606313_0012_000002 not found in AMRMTokenSecretManager.
2015-02-19 19:56:08,976 WARN [main] org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424003606313_0012_000002 not found in AMRMTokenSecretManager.
2015-02-19 19:56:08,976 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424003606313_0012_000002 not found in AMRMTokenSecretManager.
2015-02-19 19:56:08,981 ERROR [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Exception while registering
org.apache.hadoop.security.token.SecretManager$InvalidToken: appattempt_1424003606313_0012_000002 not found in AMRMTokenSecretManager.
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
    at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
    at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(ApplicationMasterProtocolPBClientImpl.java:109)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy36.registerApplicationMaster(Unknown Source)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:161)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:122)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:238)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.serviceStart(MRAppMaster.java:807)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1075)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.run(MRAppMaster.java:1478)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1474)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1407)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424003606313_0012_000002 not found in AMRMTokenSecretManager.
    at org.apache.hadoop.ipc.Client.call(Client.java:1411)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy35.registerApplicationMaster(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(ApplicationMasterProtocolPBClientImpl.java:106)
    ... 22 more
2015-02-19 19:56:08,983 INFO [main] org.apache.hadoop.service.AbstractService: Service RMCommunicator failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.token.SecretManager$InvalidToken: appattempt_1424003606313_0012_000002 not found in AMRMTokenSecretManager.
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.token.SecretManager$InvalidToken: appattempt_1424003606313_0012_000002 not found in AMRMTokenSecretManager.
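One detail worth flagging in this log: the AM registers against "quickstart.cloudera/192.168.2.185:8030", which is the client VM, not the cluster RM at 192.168.2.253, so the ResourceManager that issued the AMRMToken is not the one being asked to validate it. The scheduler address comes out of the submitted job.xml, so a client-side override along these lines might steer the AM back to the cluster (the property names are the standard YARN ones; 8030 is the stock scheduler port, so the exact value is an assumption about this cluster):

    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>hadoop0.rdpratti.com</value>
    </property>
    <!-- or, explicitly: -->
    <property>
      <name>yarn.resourcemanager.scheduler.address</name>
      <value>hadoop0.rdpratti.com:8030</value>
    </property>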

 

 

 

 

From: Ulul [mailto:hadoop@ulul.org]
Sent: Thursday, February 19, 2015 5:08 PM
To: user@hadoop.apache.org
Subject: Re: Yarn AM is abending job when submitting a remote job to cluster

 

Is your point that using the hdfs:// prefix is valid since your hdfs client works?
fs.defaultFS defines the namenode address and the filesystem type. It doesn't imply that the prefix should be used for yarn and mapreduce options that are not directly linked to hdfs.



On 19/02/2015 22:56, Ulul wrote:

In that case it's just between your hdfs client, the NN and the DNs; no YARN or MR component is involved.
The fact that this works is not related to your MR job not succeeding.



On 19/02/2015 22:45, roland.depratti wrote:

Thanks for looking at my problem.

 

I can run an hdfs command from the client, with the config file listed, that does a cat on a file in hdfs on the remote cluster and returns the contents of that file to the client.
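That check is the generic-options form of the filesystem shell; something like the following, with the input file name made up for illustration:

    hadoop fs -conf ~/conf/hadoop-cluster.xml -cat /user/cloudera/shakespeare/hamlet.txt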

 

- rd

 

 

Sent from my Verizon Wireless 4G LTE smartphone



-------- Original message --------
From: Ulul <hadoop@ulul.org>
Date: 02/19/2015 4:03 PM (GMT-05:00)
To: user@hadoop.apache.org
Subject: Re: Yarn AM is abending job when submitting a remote job to cluster

Hi
Doesn't seem like an ssl error to me (the log states that attempts to override final properties are ignored).

On the other hand the configuration seems wrong: mapreduce.jobtracker.address and yarn.resourcemanager.address should only contain an IP or a hostname. You should remove 'hdfs://', though the log doesn't suggest it has anything to do with your problem....

And what do you mean by an "HDFS job"?

Ulul
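Per Ulul's point, only fs.defaultFS takes the hdfs:// scheme; the RM-related values would carry a bare host:port. A sketch against the hostname and port quoted in the original config below:

    <property>
      <name>yarn.resourcemanager.address</name>
      <value>mycluster:8032</value>  <!-- no hdfs:// scheme -->
    </property>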

On 19/02/2015 04:22, daemeon reiydelle wrote:
> I would guess you do not have your ssl certs set up, client or server,
> based on the error.
>
> "Life should not be a journey to the grave with the intention of
> arriving safely in a pretty and well preserved body, but rather to skid
> in broadside in a cloud of smoke, thoroughly used up, totally worn out,
> and loudly proclaiming "Wow! What a Ride!"
> - Hunter Thompson
>
> Daemeon C.M. Reiydelle
> USA (+1) 415.501.0198
> London (+44) (0) 20 8144 9872
>
> On Wed, Feb 18, 2015 at 5:19 PM, Roland DePratti
> <roland.depratti@cox.net> wrote:
>
>     I have been searching for a handle on a problem with very few
>     clues. Any help pointing me in the right direction will be huge.
>
>     I have not received any input from the Cloudera google groups.
>     Perhaps this is more Yarn based and I am hoping I have more luck here.
>
>     Any help is greatly appreciated.
>
>     I am running a Hadoop cluster using CDH5.3. I also have a client
>     machine with a standalone one-node setup (VM).
>
>     All environments are running CentOS 6.6.
>
>     I have submitted some Java mapreduce jobs locally on both the
>     cluster and the standalone environment with successful completions.
>
>     I can submit a remote HDFS job from client to cluster using -conf
>     hadoop-cluster.xml (see below) and get data back from the cluster
>     with no problem.
>
>     When I submit the mapreduce jobs remotely, I get an AM error.
>
>     AM fails the job with the error:
>
>         SecretManager$InvalidToken:
>     appattempt_1424003606313_0001_000002 not found in
>     AMRMTokenSecretManager
>
>     I searched /var/log/secure on the client and cluster with no
>     unusual messages.
>
>     Here are the contents of hadoop-cluster.xml:
>
>     <?xml version="1.0" encoding="UTF-8"?>
>     <!--generated by Roland-->
>     <configuration>
>       <property>
>         <name>fs.defaultFS</name>
>         <value>hdfs://mycluser:8020</value>
>       </property>
>       <property>
>         <name>mapreduce.jobtracker.address</name>
>         <value>hdfs://mycluster:8032</value>
>       </property>
>       <property>
>         <name>yarn.resourcemanager.address</name>
>         <value>hdfs://mycluster:8032</value>
>       </property>
>     </configuration>
>
>     Here is the output from the job log on the cluster:
>
> 2015-02-15 07:51:06,544 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1424003606313_0001_000002
> 2015-02-15 07:51:06,949 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert;  Ignoring.
> 2015-02-15 07:51:06,952 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2015-02-15 07:51:06,952 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf;  Ignoring.
> 2015-02-15 07:51:06,954 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class;  Ignoring.
> 2015-02-15 07:51:06,957 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf;  Ignoring.
> 2015-02-15 07:51:06,973 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
> 2015-02-15 07:51:07,241 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2015-02-15 07:51:07,241 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@33be1aa0)
> 2015-02-15 07:51:07,332 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
> 2015-02-15 07:51:07,627 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert;  Ignoring.
> 2015-02-15 07:51:07,632 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2015-02-15 07:51:07,632 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf;  Ignoring.
> 2015-02-15 07:51:07,639 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class;  Ignoring.
> 2015-02-15 07:51:07,645 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf;  Ignoring.
> 2015-02-15 07:51:07,663 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
> 2015-02-15 07:51:08,237 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 2015-02-15 07:51:08,429 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null
> 2015-02-15 07:51:08,499 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
> 2015-02-15 07:51:08,526 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> 2015-02-15 07:51:08,527 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
> 2015-02-15 07:51:08,561 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
> 2015-02-15 07:51:08,562 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
> 2015-02-15 07:51:08,566 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
> 2015-02-15 07:51:08,568 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
> 2015-02-15 07:51:08,568 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
> 2015-02-15 07:51:08,570 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
> 2015-02-15 07:51:08,599 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Recovery is enabled. Will try to recover from previous life on best effort basis.
> 2015-02-15 07:51:08,642 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Previous history file is at hdfs://mycluster.com:8020/user/cloudera/.staging/job_1424003606313_0001/job_1424003606313_0001_1.jhist
> 2015-02-15 07:51:09,147 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Read completed tasks from history 0
> 2015-02-15 07:51:09,193 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
> 2015-02-15 07:51:09,222 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> 2015-02-15 07:51:09,277 INFO [main]

 

 

 

------=_NextPart_001_0059_01D04E73.24958A50-- ------=_NextPart_000_0058_01D04E73.24958A50 Content-Type: application/octet-stream; name="Abended RM Log.dat" Content-Transfer-Encoding: quoted-printable Content-Disposition: attachment; filename="Abended RM Log.dat" =0A= Logged in as: dr.who=0A= About Apache Hadoop=0A= Application=0A= =0A= About=0A= Jobs =0A= =0A= Tools=0A= =0A= =0A= Log Type: syslog=0A= =0A= Log Length: 385910=0A= =0A= 2015-02-21 17:54:54,905 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: JvmMetrics, JVM = related metrics etc.=0A= 2015-02-21 17:54:54,926 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterInt = org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.jobsSubmitted = with annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[], about=3D, type=3DDEFAULT, always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 17:54:54,938 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterInt = org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.jobsCompleted = with annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[], about=3D, type=3DDEFAULT, always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 17:54:54,938 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterInt = org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.jobsFailed with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[], about=3D, type=3DDEFAULT, always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 17:54:54,938 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterInt = org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.jobsKilled with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[], about=3D, type=3DDEFAULT, always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 17:54:54,939 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableGaugeInt = org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.jobsPreparing = with annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[], about=3D, type=3DDEFAULT, always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 17:54:54,939 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableGaugeInt = org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.jobsRunning with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[], about=3D, type=3DDEFAULT, always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 17:54:54,939 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterInt = org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.mapsLaunched = with annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[], about=3D, type=3DDEFAULT, always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 17:54:54,939 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterInt = org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.mapsCompleted = with annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = 
2015-02-21 17:54:54,940 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterInt org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.mapsFailed with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:54,940 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterInt org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.mapsKilled with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:54,940 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.mapsRunning with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:54,940 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.mapsWaiting with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:54,940 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterInt org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.reducesLaunched with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:54,941 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterInt org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.reducesCompleted with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:54,941 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterInt org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.reducesFailed with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:54,942 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterInt org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.reducesKilled with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:54,942 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.reducesRunning with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:54,942 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.reducesWaiting with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:54,944 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMetrics, MR App Metrics
2015-02-21 17:54:54,967 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1424550134651_0001_000001
2015-02-21 17:54:55,013 DEBUG [main] org.apache.hadoop.util.Shell: Failed to detect a valid hadoop home directory
java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
	at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:302)
	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:327)
	at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
	at org.apache.hadoop.yarn.conf.YarnConfiguration.<clinit>(YarnConfiguration.java:552)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1396)
2015-02-21 17:54:55,021 DEBUG [main] org.apache.hadoop.util.Shell: setsid exited with exit code 0
2015-02-21 17:54:55,402 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; Ignoring.
2015-02-21 17:54:55,405 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2015-02-21 17:54:55,405 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf; Ignoring.
2015-02-21 17:54:55,407 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class; Ignoring.
2015-02-21 17:54:55,410 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf; Ignoring.
2015-02-21 17:54:55,424 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2015-02-21 17:54:55,436 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:55,438 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:55,438 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[GetGroups], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:55,439 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
2015-02-21 17:54:55,561 DEBUG [main] org.apache.hadoop.security.Groups: Creating new Groups object
2015-02-21 17:54:55,570 DEBUG [main] org.apache.hadoop.security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000; warningDeltaMs=5000
2015-02-21 17:54:55,573 DEBUG [main] org.apache.hadoop.security.UserGroupInformation: hadoop login
2015-02-21 17:54:55,574 DEBUG [main] org.apache.hadoop.security.UserGroupInformation: hadoop login commit
2015-02-21 17:54:55,578 DEBUG [main] org.apache.hadoop.security.UserGroupInformation: using local user:UnixPrincipal: yarn
2015-02-21 17:54:55,601 DEBUG [main] org.apache.hadoop.security.UserGroupInformation: UGI loginUser:yarn (auth:SIMPLE)
2015-02-21 17:54:55,603 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2015-02-21 17:54:55,603 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@36372287)
2015-02-21 17:54:55,684 DEBUG [main] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1474)
2015-02-21 17:54:55,685 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapreduce.v2.app.MRAppMaster entered state INITED
2015-02-21 17:54:55,715 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
2015-02-21 17:54:55,975 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; Ignoring.
2015-02-21 17:54:55,982 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2015-02-21 17:54:55,983 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf; Ignoring.
2015-02-21 17:54:55,984 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class; Ignoring.
2015-02-21 17:54:55,990 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf; Ignoring.
2015-02-21 17:54:56,009 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2015-02-21 17:54:56,068 DEBUG [main] org.apache.hadoop.hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
2015-02-21 17:54:56,068 DEBUG [main] org.apache.hadoop.hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
2015-02-21 17:54:56,068 DEBUG [main] org.apache.hadoop.hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
2015-02-21 17:54:56,068 DEBUG [main] org.apache.hadoop.hdfs.BlockReaderLocal: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
2015-02-21 17:54:56,096 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: No KeyProvider found.
2015-02-21 17:54:56,125 DEBUG [main] org.apache.hadoop.io.retry.RetryUtils: multipleLinearRandomRetry = null
2015-02-21 17:54:56,153 DEBUG [main] org.apache.hadoop.ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@3996a317
2015-02-21 17:54:56,160 DEBUG [main] org.apache.hadoop.ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@27c8bfa4
2015-02-21 17:54:56,374 DEBUG [Finalizer] org.apache.hadoop.fs.azure.NativeAzureFileSystem: finalize() called.
2015-02-21 17:54:56,374 DEBUG [Finalizer] org.apache.hadoop.fs.azure.NativeAzureFileSystem: finalize() called.
2015-02-21 17:54:56,665 DEBUG [main] org.apache.hadoop.util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
2015-02-21 17:54:56,665 DEBUG [main] org.apache.hadoop.util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
2015-02-21 17:54:56,665 DEBUG [main] org.apache.hadoop.util.NativeCodeLoader: java.library.path=/data/yarn/nm/usercache/cloudera/appcache/application_1424550134651_0001/container_1424550134651_0001_01_000001:/lib/native::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2015-02-21 17:54:56,665 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-02-21 17:54:56,713 DEBUG [main] org.apache.hadoop.util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
2015-02-21 17:54:56,717 DEBUG [main] org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
2015-02-21 17:54:56,734 DEBUG [main] org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.
2015-02-21 17:54:56,735 DEBUG [main] org.apache.hadoop.ipc.Client: Connecting to hadoop0.rdpratti.com/192.168.2.253:8020
2015-02-21 17:54:56,747 DEBUG [main] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
2015-02-21 17:54:56,836 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 17:54:56,846 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"eucgkiSBnh0/xbl/T69JNapQ1WSuMsxrRhuGk1eX\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}
auths {
  method: "SIMPLE"
  mechanism: ""
}

2015-02-21 17:54:56,848 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:@org.apache.hadoop.security.token.TokenInfo(value=class org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector)
2015-02-21 17:54:56,848 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Use SIMPLE authentication for protocol ClientNamenodeProtocolPB
2015-02-21 17:54:56,848 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
auths {
  method: "SIMPLE"
  mechanism: ""
}
2015-02-21 17:54:56,857 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera: starting, having connections 1
2015-02-21 17:54:56,862 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #0
2015-02-21 17:54:56,890 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #0
2015-02-21 17:54:56,890 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 170ms
2015-02-21 17:54:56,936 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #1
2015-02-21 17:54:56,938 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #1
2015-02-21 17:54:56,938 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 2ms
2015-02-21 17:54:56,939 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #2
2015-02-21 17:54:56,940 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #2
2015-02-21 17:54:56,940 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 2ms
2015-02-21 17:54:56,940 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #3
2015-02-21 17:54:56,941 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #3
2015-02-21 17:54:56,941 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 17:54:56,942 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null
2015-02-21 17:54:57,055 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2015-02-21 17:54:57,057 DEBUG [main] org.apache.hadoop.service.CompositeService: Adding service Dispatcher
2015-02-21 17:54:57,069 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapreduce.v2.app.client.MRClientService entered state INITED
2015-02-21 17:54:57,071 DEBUG [main] org.apache.hadoop.service.CompositeService: Adding service CommitterEventHandler
2015-02-21 17:54:57,078 DEBUG [main] org.apache.hadoop.service.CompositeService: Adding service org.apache.hadoop.mapred.TaskAttemptListenerImpl
2015-02-21 17:54:57,083 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2015-02-21 17:54:57,083 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2015-02-21 17:54:57,095 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2015-02-21 17:54:57,096 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2015-02-21 17:54:57,096 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2015-02-21 17:54:57,097 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2015-02-21 17:54:57,097 DEBUG [main] org.apache.hadoop.service.CompositeService: Adding service org.apache.hadoop.mapreduce.v2.app.MRAppMaster$StagingDirCleaningService
2015-02-21 17:54:57,097 DEBUG [main] org.apache.hadoop.service.CompositeService: Adding service org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2015-02-21 17:54:57,098 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2015-02-21 17:54:57,098 DEBUG [main] org.apache.hadoop.service.CompositeService: Adding service org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2015-02-21 17:54:57,099 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2015-02-21 17:54:57,099 DEBUG [main] org.apache.hadoop.service.CompositeService: Adding service JobHistoryEventHandler
2015-02-21 17:54:57,099 DEBUG [main] org.apache.hadoop.service.CompositeService: org.apache.hadoop.mapreduce.v2.app.MRAppMaster: initing services, size=7
2015-02-21 17:54:57,099 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: Dispatcher entered state INITED
2015-02-21 17:54:57,099 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: CommitterEventHandler entered state INITED
2015-02-21 17:54:57,100 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapred.TaskAttemptListenerImpl entered state INITED
2015-02-21 17:54:57,102 DEBUG [main] org.apache.hadoop.service.CompositeService: Adding service TaskHeartbeatHandler
2015-02-21 17:54:57,102 DEBUG [main] org.apache.hadoop.service.CompositeService: org.apache.hadoop.mapred.TaskAttemptListenerImpl: initing services, size=1
2015-02-21 17:54:57,102 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: TaskHeartbeatHandler entered state INITED
2015-02-21 17:54:57,102 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapreduce.v2.app.MRAppMaster$StagingDirCleaningService entered state INITED
2015-02-21 17:54:57,102 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter entered state INITED
2015-02-21 17:54:57,102 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter entered state INITED
2015-02-21 17:54:57,102 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: JobHistoryEventHandler entered state INITED
2015-02-21 17:54:57,106 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #4
2015-02-21 17:54:57,107 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #4
2015-02-21 17:54:57,107 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 2ms
2015-02-21 17:54:57,108 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #5
2015-02-21 17:54:57,109 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #5
2015-02-21 17:54:57,109 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 17:54:57,110 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #6
2015-02-21 17:54:57,110 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #6
2015-02-21 17:54:57,111 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 17:54:57,156 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2015-02-21 17:54:57,373 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: from system property: null
2015-02-21 17:54:57,373 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: from environment variable: null
2015-02-21 17:54:57,394 DEBUG [main] org.apache.commons.configuration.ConfigurationUtils: ConfigurationUtils.locate(): base is null, name is hadoop-metrics2-mrappmaster.properties
2015-02-21 17:54:57,396 DEBUG [main] org.apache.commons.configuration.ConfigurationUtils: ConfigurationUtils.locate(): base is null, name is hadoop-metrics2.properties
2015-02-21 17:54:57,396 DEBUG [main] org.apache.commons.configuration.ConfigurationUtils: Loading configuration from the context classpath (hadoop-metrics2.properties)
2015-02-21 17:54:57,401 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-02-21 17:54:57,402 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig:
2015-02-21 17:54:57,402 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig:
2015-02-21 17:54:57,404 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: period
2015-02-21 17:54:57,407 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableStat org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotStat with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Snapshot, Snapshot stats], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:57,408 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableStat org.apache.hadoop.metrics2.impl.MetricsSystemImpl.publishStat with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Publish, Publishing stats], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:57,408 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterLong org.apache.hadoop.metrics2.impl.MetricsSystemImpl.droppedPubAll with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Dropped updates by all sinks], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:57,411 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: source.source.start_mbeans
2015-02-21 17:54:57,412 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'MetricsConfig' for key: source.start_mbeans
2015-02-21 17:54:57,412 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: *.source.start_mbeans
2015-02-21 17:54:57,456 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating attr cache...
2015-02-21 17:54:57,456 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done. # tags & metrics=10
2015-02-21 17:54:57,456 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating info cache...
2015-02-21 17:54:57,456 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: [javax.management.MBeanAttributeInfo[description=Metrics context, name=tag.Context, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of active metrics sources, name=NumActiveSources, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of all registered metrics sources, name=NumAllSources, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of active metrics sinks, name=NumActiveSinks, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of all registered metrics sinks, name=NumAllSinks, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of ops for snapshot stats, name=SnapshotNumOps, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Average time for snapshot stats, name=SnapshotAvgTime, type=java.lang.Double, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of ops for publishing stats, name=PublishNumOps, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Average time for publishing stats, name=PublishAvgTime, type=java.lang.Double, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Dropped updates by all sinks, name=DroppedPubAll, type=java.lang.Long, read-only, descriptor={}]]
2015-02-21 17:54:57,456 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done
2015-02-21 17:54:57,456 DEBUG [main] org.apache.hadoop.metrics2.util.MBeans: Registered Hadoop:service=MRAppMaster,name=MetricsSystem,sub=Stats
2015-02-21 17:54:57,456 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2015-02-21 17:54:57,457 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-02-21 17:54:57,457 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics system started
2015-02-21 17:54:57,457 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: source.source.start_mbeans
2015-02-21 17:54:57,457 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'MetricsConfig' for key: source.start_mbeans
2015-02-21 17:54:57,457 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: *.source.start_mbeans
2015-02-21 17:54:57,459 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating attr cache...
2015-02-21 17:54:57,459 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done. # tags & metrics=27
2015-02-21 17:54:57,459 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating info cache...
2015-02-21 17:54:57,460 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: [javax.management.MBeanAttributeInfo[description=Metrics context, name=tag.Context, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Process name, name=tag.ProcessName, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Session ID, name=tag.SessionId, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Local hostname, name=tag.Hostname, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Non-heap memory used in MB, name=MemNonHeapUsedM, type=java.lang.Float, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Non-heap memory committed in MB, name=MemNonHeapCommittedM, type=java.lang.Float, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Non-heap memory max in MB, name=MemNonHeapMaxM, type=java.lang.Float, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Heap memory used in MB, name=MemHeapUsedM, type=java.lang.Float, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Heap memory committed in MB, name=MemHeapCommittedM, type=java.lang.Float, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Heap memory max in MB, name=MemHeapMaxM, type=java.lang.Float, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Max memory size in MB, name=MemMaxM, type=java.lang.Float, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=GC Count for PS Scavenge, name=GcCountPS Scavenge, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=GC Time for PS Scavenge, name=GcTimeMillisPS Scavenge, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=GC Count for PS MarkSweep, name=GcCountPS MarkSweep, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=GC Time for PS MarkSweep, name=GcTimeMillisPS MarkSweep, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total GC count, name=GcCount, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total GC time in milliseconds, name=GcTimeMillis, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of new threads, name=ThreadsNew, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of runnable threads, name=ThreadsRunnable, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of blocked threads, name=ThreadsBlocked, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of waiting threads, name=ThreadsWaiting, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of timed waiting threads, name=ThreadsTimedWaiting, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of terminated threads, name=ThreadsTerminated, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total number of fatal log events, name=LogFatal, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total number of error log events, name=LogError, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total number of warning log events, name=LogWarn, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Total number of info log events, name=LogInfo, type=java.lang.Long, read-only, descriptor={}]]
2015-02-21 17:54:57,460 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done
2015-02-21 17:54:57,460 DEBUG [main] org.apache.hadoop.metrics2.util.MBeans: Registered Hadoop:service=MRAppMaster,name=JvmMetrics
2015-02-21 17:54:57,460 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source JvmMetrics registered.
2015-02-21 17:54:57,460 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Registered source JvmMetrics
2015-02-21 17:54:57,460 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: source.source.start_mbeans
2015-02-21 17:54:57,460 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'MetricsConfig' for key: source.start_mbeans
2015-02-21 17:54:57,460 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: *.source.start_mbeans
2015-02-21 17:54:57,461 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating attr cache...
2015-02-21 17:54:57,461 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done. # tags & metrics=20
2015-02-21 17:54:57,461 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating info cache...
2015-02-21 17:54:57,461 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: [javax.management.MBeanAttributeInfo[description=Metrics context, name=tag.Context, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Local hostname, name=tag.Hostname, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=JobsSubmitted, name=JobsSubmitted, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=JobsCompleted, name=JobsCompleted, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=JobsFailed, name=JobsFailed, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=JobsKilled, name=JobsKilled, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=JobsPreparing, name=JobsPreparing, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=JobsRunning, name=JobsRunning, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=MapsLaunched, name=MapsLaunched, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=MapsCompleted, name=MapsCompleted, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=MapsFailed, name=MapsFailed, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=MapsKilled, name=MapsKilled, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=MapsRunning, name=MapsRunning, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=MapsWaiting, name=MapsWaiting, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=ReducesLaunched, name=ReducesLaunched, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=ReducesCompleted, name=ReducesCompleted, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=ReducesFailed, name=ReducesFailed, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=ReducesKilled, name=ReducesKilled, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=ReducesRunning, name=ReducesRunning, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=ReducesWaiting, name=ReducesWaiting, type=java.lang.Integer, read-only, descriptor={}]]
2015-02-21 17:54:57,461 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done
2015-02-21 17:54:57,461 DEBUG [main] org.apache.hadoop.metrics2.util.MBeans: Registered Hadoop:service=MRAppMaster,name=MRAppMetrics
2015-02-21 17:54:57,461 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MRAppMetrics registered.
2015-02-21 17:54:57,461 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Registered source MRAppMetrics
2015-02-21 17:54:57,462 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: source.source.start_mbeans
2015-02-21 17:54:57,462 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'MetricsConfig' for key: source.start_mbeans
2015-02-21 17:54:57,462 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: *.source.start_mbeans
2015-02-21 17:54:57,462 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating attr cache...
2015-02-21 17:54:57,462 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done. # tags & metrics=8
2015-02-21 17:54:57,462 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating info cache...
2015-02-21 17:54:57,462 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: [javax.management.MBeanAttributeInfo[description=Metrics context, name=tag.Context, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Local hostname, name=tag.Hostname, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of ops for rate of successful kerberos logins and latency (milliseconds), name=LoginSuccessNumOps, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Average time for rate of successful kerberos logins and latency (milliseconds), name=LoginSuccessAvgTime, type=java.lang.Double, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of ops for rate of failed kerberos logins and latency (milliseconds), name=LoginFailureNumOps, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Average time for rate of failed kerberos logins and latency (milliseconds), name=LoginFailureAvgTime, type=java.lang.Double, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of ops for getGroups, name=GetGroupsNumOps, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Average time for getGroups, name=GetGroupsAvgTime, type=java.lang.Double, read-only, descriptor={}]]
2015-02-21 17:54:57,462 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done
2015-02-21 17:54:57,462 DEBUG [main] org.apache.hadoop.metrics2.util.MBeans: Registered Hadoop:service=MRAppMaster,name=UgiMetrics
2015-02-21 17:54:57,462 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source UgiMetrics registered.
2015-02-21 17:54:57,462 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Registered source UgiMetrics
2015-02-21 17:54:57,464 DEBUG [main] org.apache.hadoop.metrics2.util.MBeans: Registered Hadoop:service=MRAppMaster,name=MetricsSystem,sub=Control
2015-02-21 17:54:57,465 DEBUG [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0001 of type JOB_INIT
2015-02-21 17:54:57,465 DEBUG [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: startJobs: parent=/user/cloudera/.staging child=job_1424550134651_0001
2015-02-21 17:54:57,468 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for job_1424550134651_0001 to jobTokenSecretManager
2015-02-21 17:54:57,481 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #7
2015-02-21 17:54:57,483 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #7
2015-02-21 17:54:57,483 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 2ms
2015-02-21 17:54:57,491 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #8
2015-02-21 17:54:57,493 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #8
2015-02-21 17:54:57,493 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getBlockLocations took 2ms
2015-02-21 17:54:57,534 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: newInfo = LocatedBlocks{
  fileLength=151
  underConstruction=false
  blocks=[LocatedBlock{BP-268700609-192.168.2.253-1419532004456:blk_1073754483_13659; getBlockSize()=151; corrupt=false; offset=0; locs=[192.168.2.251:50010]}]
  lastLocatedBlock=LocatedBlock{BP-268700609-192.168.2.253-1419532004456:blk_1073754483_13659; getBlockSize()=151; corrupt=false; offset=0; locs=[192.168.2.251:50010]}
  isLastBlockComplete=true}
2015-02-21 17:54:57,538 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: Connecting to datanode 192.168.2.251:50010
2015-02-21 17:54:57,547 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #9
2015-02-21 17:54:57,548 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #9
2015-02-21 17:54:57,548 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getServerDefaults took 1ms
2015-02-21 17:54:57,554 DEBUG [main] org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /192.168.2.251, datanodeId = 192.168.2.251:50010
2015-02-21 17:54:57,605 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing job_1424550134651_0001 because: not enabled; too much RAM;
2015-02-21 17:54:57,624 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job job_1424550134651_0001 = 5343207. Number of splits = 5
2015-02-21 17:54:57,626 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for job job_1424550134651_0001 = 1
2015-02-21 17:54:57,626 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1424550134651_0001Job Transitioned from NEW to INITED
2015-02-21 17:54:57,627 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized, multi-container job job_1424550134651_0001.
2015-02-21 17:54:57,628 DEBUG [main] org.apache.hadoop.yarn.ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
2015-02-21 17:54:57,629 DEBUG [main] org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc server for protocol interface org.apache.hadoop.mapreduce.v2.api.MRClientProtocol with 1 handlers
2015-02-21 17:54:57,686 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-02-21 17:54:57,686 DEBUG [main] org.apache.hadoop.ipc.Server: TOKEN authentication enabled for secret manager
2015-02-21 17:54:57,686 DEBUG [main] org.apache.hadoop.ipc.Server: Server accepts auth methods:[TOKEN, SIMPLE]
2015-02-21 17:54:57,694 INFO [Socket Reader #1 for port 43574] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 43574
2015-02-21 17:54:57,697 DEBUG [main] org.apache.hadoop.ipc.metrics.RpcMetrics: Initialized MetricsRegistry{info=MetricsInfoImpl{name=rpc, description=rpc}, tags=[MetricsTag{info=MetricsInfoImpl{name=port, description=RPC port}, value=43574}], metrics=[]}
2015-02-21 17:54:57,698 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterLong org.apache.hadoop.ipc.metrics.RpcMetrics.receivedBytes with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Number of received bytes], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:57,698 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterLong org.apache.hadoop.ipc.metrics.RpcMetrics.sentBytes with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Number of sent bytes], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:57,698 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.ipc.metrics.RpcMetrics.rpcQueueTime with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Queue time], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:57,698 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.ipc.metrics.RpcMetrics.rpcProcessingTime with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Processsing time], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:57,699 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterInt org.apache.hadoop.ipc.metrics.RpcMetrics.rpcAuthenticationFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Number of authentication failures], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:57,699 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterInt org.apache.hadoop.ipc.metrics.RpcMetrics.rpcAuthenticationSuccesses with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Number of authentication successes], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:57,699 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterInt org.apache.hadoop.ipc.metrics.RpcMetrics.rpcAuthorizationFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Number of authorization failures], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:57,699 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterInt org.apache.hadoop.ipc.metrics.RpcMetrics.rpcAuthorizationSuccesses with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Number of authorization sucesses], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:57,700 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: method public int org.apache.hadoop.ipc.metrics.RpcMetrics.numOpenConnections() with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Number of open connections], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:57,702 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: method public int org.apache.hadoop.ipc.metrics.RpcMetrics.callQueueLength() with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Length of the call queue], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:57,702 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: RpcActivityForPort43574, Aggregate RPC metrics
2015-02-21 17:54:57,702 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: source.source.start_mbeans
2015-02-21 17:54:57,702 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'MetricsConfig' for key: source.start_mbeans
2015-02-21 17:54:57,702 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: *.source.start_mbeans
2015-02-21 17:54:57,702 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating attr cache...
2015-02-21 17:54:57,703 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done. # tags & metrics=15
2015-02-21 17:54:57,703 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating info cache...
2015-02-21 17:54:57,703 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: [javax.management.MBeanAttributeInfo[description=RPC port, name=tag.port, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Metrics context, name=tag.Context, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Local hostname, name=tag.Hostname, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of received bytes, name=ReceivedBytes, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of sent bytes, name=SentBytes, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of ops for queue time, name=RpcQueueTimeNumOps, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Average time for queue time, name=RpcQueueTimeAvgTime, type=java.lang.Double, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of ops for processsing time, name=RpcProcessingTimeNumOps, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Average time for processsing time, name=RpcProcessingTimeAvgTime, type=java.lang.Double, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of authentication failures, name=RpcAuthenticationFailures, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of authentication successes, name=RpcAuthenticationSuccesses, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of authorization failures, name=RpcAuthorizationFailures, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of authorization sucesses, name=RpcAuthorizationSuccesses, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of open connections, name=NumOpenConnections, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Length of the call queue, name=CallQueueLength, type=java.lang.Integer, read-only, descriptor={}]]
2015-02-21 17:54:57,703 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done
2015-02-21 17:54:57,703 DEBUG [main] org.apache.hadoop.metrics2.util.MBeans: Registered Hadoop:service=MRAppMaster,name=RpcActivityForPort43574
2015-02-21 17:54:57,703 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort43574 registered.
2015-02-21 17:54:57,703 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Registered source RpcActivityForPort43574
2015-02-21 17:54:57,704 DEBUG [main] org.apache.hadoop.ipc.metrics.RpcDetailedMetrics: MetricsInfoImpl{name=rpcdetailed, description=rpcdetailed}
2015-02-21 17:54:57,704 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRates org.apache.hadoop.ipc.metrics.RpcDetailedMetrics.rates with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:57,709 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: RpcDetailedActivityForPort43574, Per method RPC metrics
2015-02-21 17:54:57,709 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: source.source.start_mbeans
2015-02-21 17:54:57,709 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'MetricsConfig' for key: source.start_mbeans
2015-02-21 17:54:57,709 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: *.source.start_mbeans
2015-02-21 17:54:57,709 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating attr cache...
2015-02-21 17:54:57,709 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done. # tags & metrics=3
2015-02-21 17:54:57,709 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating info cache...
2015-02-21 17:54:57,709 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: [javax.management.MBeanAttributeInfo[description=RPC port, name=tag.port, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Metrics context, name=tag.Context, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Local hostname, name=tag.Hostname, type=java.lang.String, read-only, descriptor={}]]
2015-02-21 17:54:57,709 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done
2015-02-21 17:54:57,709 DEBUG [main] org.apache.hadoop.metrics2.util.MBeans: Registered Hadoop:service=MRAppMaster,name=RpcDetailedActivityForPort43574
2015-02-21 17:54:57,709 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort43574 registered.
2015-02-21 17:54:57,710 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Registered source RpcDetailedActivityForPort43574
2015-02-21 17:54:57,718 DEBUG [main] org.apache.hadoop.ipc.Server: RpcKind = RPC_PROTOCOL_BUFFER Protocol Name = org.apache.hadoop.ipc.ProtocolMetaInfoPB version=1 ProtocolImpl=org.apache.hadoop.ipc.protobuf.ProtocolInfoProtos$ProtocolInfoService$2 protocolClass=org.apache.hadoop.ipc.ProtocolMetaInfoPB
2015-02-21 17:54:57,718 DEBUG [main] org.apache.hadoop.ipc.Server: RpcKind = RPC_PROTOCOL_BUFFER Protocol Name = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB version=1 ProtocolImpl=org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2 protocolClass=org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB
2015-02-21 17:54:57,718 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
2015-02-21 17:54:57,718 DEBUG [main] org.apache.hadoop.ipc.Server: RpcKind = RPC_PROTOCOL_BUFFER Protocol Name = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB version=1 ProtocolImpl=org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2 protocolClass=org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB
2015-02-21 17:54:57,718 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-02-21 17:54:57,719 DEBUG [IPC Server handler 0 on 43574] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 43574: starting
2015-02-21 17:54:57,719 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated MRClientService at hadoop0.rdpratti.com/192.168.2.253:43574
2015-02-21 17:54:57,724 INFO [IPC Server listener on 43574] org.apache.hadoop.ipc.Server: IPC Server listener on 43574: starting
2015-02-21 17:54:57,797 INFO [main] org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-02-21 17:54:57,798 DEBUG [main] org.mortbay.log: filterNameMap={NoCacheFilter=NoCacheFilter}
2015-02-21 17:54:57,798 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15)]
2015-02-21 17:54:57,798 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,798 DEBUG [main] org.mortbay.log: servletPathMap=null
2015-02-21 17:54:57,798 DEBUG [main] org.mortbay.log: servletNameMap=null
2015-02-21 17:54:57,800 DEBUG [main] org.mortbay.log: Container Server@541821e6 + org.mortbay.thread.QueuedThreadPool@458ba94d as threadpool
2015-02-21 17:54:57,803 INFO [main] org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.mapreduce is not defined
2015-02-21 17:54:57,803 DEBUG [main] org.mortbay.log: Container Server@541821e6 + ContextHandlerCollection@ffaf13d as handler
2015-02-21 17:54:57,804 DEBUG [main] org.mortbay.log: Container ContextHandlerCollection@ffaf13d + org.mortbay.jetty.webapp.WebAppContext@23f3e3fd{/,jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/mapreduce} as handler
2015-02-21 17:54:57,804 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + NoCacheFilter as filter
2015-02-21 17:54:57,804 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (F=NoCacheFilter,[/*],[],15) as filterMapping
2015-02-21 17:54:57,804 DEBUG [main] org.mortbay.log: Container SecurityHandler@4799bfc + ServletHandler@60fd097b as handler
2015-02-21 17:54:57,804 DEBUG [main] org.mortbay.log: Container SessionHandler@4befbfaf + SecurityHandler@4799bfc as handler
2015-02-21 17:54:57,804 DEBUG [main] org.mortbay.log: Container SessionHandler@4befbfaf + org.mortbay.jetty.servlet.HashSessionManager@6911a11b as sessionManager
2015-02-21 17:54:57,804 DEBUG [main] org.mortbay.log: Container org.mortbay.jetty.webapp.WebAppContext@23f3e3fd{/,jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/mapreduce} + SessionHandler@4befbfaf as handler
2015-02-21 17:54:57,804 DEBUG [main] org.mortbay.log: Container org.mortbay.jetty.webapp.WebAppContext@23f3e3fd{/,jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/mapreduce} + ErrorPageErrorHandler@4682981 as error
2015-02-21 17:54:57,805 DEBUG [main] org.mortbay.log: Container ContextHandlerCollection@ffaf13d + org.mortbay.jetty.servlet.Context@527cd669{/static,null} as handler
2015-02-21 17:54:57,805 DEBUG [main] org.mortbay.log: Container org.mortbay.jetty.servlet.Context@527cd669{/static,null} + ServletHandler@1e0b1ce as handler
2015-02-21 17:54:57,815 DEBUG [main] org.mortbay.log: Container ServletHandler@1e0b1ce + org.mortbay.jetty.servlet.DefaultServlet-1184676877 as servlet
2015-02-21 17:54:57,816 DEBUG [main] org.mortbay.log: Container ServletHandler@1e0b1ce + (S=org.mortbay.jetty.servlet.DefaultServlet-1184676877,[/*]) as servletMapping
2015-02-21 17:54:57,817 DEBUG [main] org.mortbay.log: filterNameMap=null
2015-02-21 17:54:57,817 DEBUG [main] org.mortbay.log: pathFilters=null
2015-02-21 17:54:57,817 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,817 DEBUG [main] org.mortbay.log: servletPathMap={/*=org.mortbay.jetty.servlet.DefaultServlet-1184676877}
2015-02-21 17:54:57,817 DEBUG [main] org.mortbay.log: servletNameMap={org.mortbay.jetty.servlet.DefaultServlet-1184676877=org.mortbay.jetty.servlet.DefaultServlet-1184676877}
2015-02-21 17:54:57,817 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + safety as filter
2015-02-21 17:54:57,817 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (F=safety,[/*],[],15) as filterMapping
2015-02-21 17:54:57,817 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety, NoCacheFilter=NoCacheFilter}
2015-02-21 17:54:57,818 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15)]
2015-02-21 17:54:57,818 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,818 DEBUG [main] org.mortbay.log: servletPathMap=null
2015-02-21 17:54:57,818 DEBUG [main] org.mortbay.log: servletNameMap=null
2015-02-21 17:54:57,818 DEBUG [main] org.mortbay.log: Container ServletHandler@1e0b1ce + safety as filter
2015-02-21 17:54:57,818 DEBUG [main] org.mortbay.log: Container ServletHandler@1e0b1ce + (F=safety,[/*],[],15) as filterMapping
2015-02-21 17:54:57,818 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety}
2015-02-21 17:54:57,818 DEBUG [main] org.mortbay.log: pathFilters=[(F=safety,[/*],[],15)]
2015-02-21 17:54:57,818 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,818 DEBUG [main] org.mortbay.log: servletPathMap={/*=org.mortbay.jetty.servlet.DefaultServlet-1184676877}
2015-02-21 17:54:57,818 DEBUG [main] org.mortbay.log: servletNameMap={org.mortbay.jetty.servlet.DefaultServlet-1184676877=org.mortbay.jetty.servlet.DefaultServlet-1184676877}
2015-02-21 17:54:57,818 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-02-21 17:54:57,825 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + AM_PROXY_FILTER as filter
2015-02-21 17:54:57,825 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15) as filterMapping
2015-02-21 17:54:57,825 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:57,825 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15)]
2015-02-21 17:54:57,825 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,825 DEBUG [main] org.mortbay.log: servletPathMap=null
2015-02-21 17:54:57,825 DEBUG [main] org.mortbay.log: servletNameMap=null
2015-02-21 17:54:57,825 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce
2015-02-21 17:54:57,825 DEBUG [main] org.mortbay.log: Container ServletHandler@1e0b1ce + AM_PROXY_FILTER as filter
2015-02-21 17:54:57,825 DEBUG [main] org.mortbay.log: Container ServletHandler@1e0b1ce + (F=AM_PROXY_FILTER,[/*],[],15) as filterMapping
2015-02-21 17:54:57,825 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:57,825 DEBUG [main] org.mortbay.log: pathFilters=[(F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[/*],[],15)]
2015-02-21 17:54:57,825 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,825 DEBUG [main] org.mortbay.log: servletPathMap={/*=org.mortbay.jetty.servlet.DefaultServlet-1184676877}
2015-02-21 17:54:57,825 DEBUG [main] org.mortbay.log: servletNameMap={org.mortbay.jetty.servlet.DefaultServlet-1184676877=org.mortbay.jetty.servlet.DefaultServlet-1184676877}
2015-02-21 17:54:57,825 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static
2015-02-21 17:54:57,826 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + stacks as servlet
2015-02-21 17:54:57,826 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (S=stacks,[/stacks]) as servletMapping
2015-02-21 17:54:57,826 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:57,826 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15)]
2015-02-21 17:54:57,826 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,827 DEBUG [main] org.mortbay.log: servletPathMap={/stacks=stacks}
2015-02-21 17:54:57,827 DEBUG [main] org.mortbay.log: servletNameMap={stacks=stacks}
2015-02-21 17:54:57,827 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (F=AM_PROXY_FILTER,[/stacks],[],15) as filterMapping
2015-02-21 17:54:57,827 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:57,827 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15), (F=AM_PROXY_FILTER,[/stacks],[],15)]
2015-02-21 17:54:57,827 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,827 DEBUG [main] org.mortbay.log: servletPathMap={/stacks=stacks}
2015-02-21 17:54:57,827 DEBUG [main] org.mortbay.log: servletNameMap={stacks=stacks}
2015-02-21 17:54:57,828 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + logLevel as servlet
2015-02-21 17:54:57,828 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (S=logLevel,[/logLevel]) as servletMapping
2015-02-21 17:54:57,828 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:57,828 DEBUG [main] org.mortbay.log:
2015-02-21 17:54:57,828 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,828 DEBUG [main] org.mortbay.log: servletPathMap={/stacks=stacks, /logLevel=logLevel}
2015-02-21 17:54:57,828 DEBUG [main] org.mortbay.log: servletNameMap={logLevel=logLevel, stacks=stacks}
2015-02-21 17:54:57,828 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (F=AM_PROXY_FILTER,[/logLevel],[],15) as filterMapping
2015-02-21 17:54:57,828 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:57,828 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15), (F=AM_PROXY_FILTER,[/stacks],[],15), (F=AM_PROXY_FILTER,[/logLevel],[],15)]
2015-02-21 17:54:57,828 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,828 DEBUG [main] org.mortbay.log: servletPathMap={/stacks=stacks, /logLevel=logLevel}
2015-02-21 17:54:57,828 DEBUG [main] org.mortbay.log: servletNameMap={logLevel=logLevel, stacks=stacks}
2015-02-21 17:54:57,829 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + metrics as servlet
2015-02-21 17:54:57,829 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (S=metrics,[/metrics]) as servletMapping
2015-02-21 17:54:57,829 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:57,829 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15), (F=AM_PROXY_FILTER,[/stacks],[],15), (F=AM_PROXY_FILTER,[/logLevel],[],15)]
2015-02-21 17:54:57,829 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,829 DEBUG [main] org.mortbay.log: servletPathMap={/metrics=metrics, /stacks=stacks, /logLevel=logLevel}
2015-02-21 17:54:57,829 DEBUG [main] org.mortbay.log: servletNameMap={metrics=metrics, logLevel=logLevel, stacks=stacks}
2015-02-21 17:54:57,830 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (F=AM_PROXY_FILTER,[/metrics],[],15) as filterMapping
2015-02-21 17:54:57,830 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:57,830 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15), (F=AM_PROXY_FILTER,[/stacks],[],15), (F=AM_PROXY_FILTER,[/logLevel],[],15), (F=AM_PROXY_FILTER,[/metrics],[],15)]
2015-02-21 17:54:57,830 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,830 DEBUG [main] org.mortbay.log: servletPathMap={/metrics=metrics, /stacks=stacks, /logLevel=logLevel}
2015-02-21 17:54:57,830 DEBUG [main] org.mortbay.log: servletNameMap={metrics=metrics, logLevel=logLevel, stacks=stacks}
2015-02-21 17:54:57,831 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + jmx as servlet
2015-02-21 17:54:57,831 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (S=jmx,[/jmx]) as servletMapping
2015-02-21 17:54:57,831 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:57,831 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15), (F=AM_PROXY_FILTER,[/stacks],[],15), (F=AM_PROXY_FILTER,[/logLevel],[],15), (F=AM_PROXY_FILTER,[/metrics],[],15)]
2015-02-21 17:54:57,831 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,831 DEBUG [main] org.mortbay.log: servletPathMap={/metrics=metrics, /jmx=jmx, /stacks=stacks, /logLevel=logLevel}
2015-02-21 17:54:57,831 DEBUG [main] org.mortbay.log: servletNameMap={jmx=jmx, metrics=metrics, logLevel=logLevel, stacks=stacks}
2015-02-21 17:54:57,831 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (F=AM_PROXY_FILTER,[/jmx],[],15) as filterMapping
2015-02-21 17:54:57,831 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:57,831 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15), (F=AM_PROXY_FILTER,[/stacks],[],15), (F=AM_PROXY_FILTER,[/logLevel],[],15), (F=AM_PROXY_FILTER,[/metrics],[],15), (F=AM_PROXY_FILTER,[/jmx],[],15)]
2015-02-21 17:54:57,831 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,831 DEBUG [main] org.mortbay.log: servletPathMap={/metrics=metrics, /jmx=jmx, /stacks=stacks, /logLevel=logLevel}
2015-02-21 17:54:57,832 DEBUG [main] org.mortbay.log: servletNameMap={jmx=jmx, metrics=metrics, logLevel=logLevel, stacks=stacks}
2015-02-21 17:54:57,832 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + conf as servlet
2015-02-21 17:54:57,832 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (S=conf,[/conf]) as servletMapping
2015-02-21 17:54:57,832 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:57,832 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15), (F=AM_PROXY_FILTER,[/stacks],[],15), (F=AM_PROXY_FILTER,[/logLevel],[],15), (F=AM_PROXY_FILTER,[/metrics],[],15), (F=AM_PROXY_FILTER,[/jmx],[],15)]
2015-02-21 17:54:57,832 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,833 DEBUG [main] org.mortbay.log: servletPathMap={/metrics=metrics, /conf=conf, /jmx=jmx, /stacks=stacks, /logLevel=logLevel}
2015-02-21 17:54:57,833 DEBUG [main] org.mortbay.log: servletNameMap={jmx=jmx, metrics=metrics, logLevel=logLevel, conf=conf, stacks=stacks}
2015-02-21 17:54:57,833 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (F=AM_PROXY_FILTER,[/conf],[],15) as filterMapping
2015-02-21 17:54:57,833 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:57,833 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15), (F=AM_PROXY_FILTER,[/stacks],[],15), (F=AM_PROXY_FILTER,[/logLevel],[],15), (F=AM_PROXY_FILTER,[/metrics],[],15), (F=AM_PROXY_FILTER,[/jmx],[],15), (F=AM_PROXY_FILTER,[/conf],[],15)]
2015-02-21 17:54:57,833 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,833 DEBUG [main] org.mortbay.log: servletPathMap={/metrics=metrics, /conf=conf, /jmx=jmx, /stacks=stacks, /logLevel=logLevel}
2015-02-21 17:54:57,833 DEBUG [main] org.mortbay.log: servletNameMap={jmx=jmx, metrics=metrics, logLevel=logLevel, conf=conf, stacks=stacks}
2015-02-21 17:54:57,833 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /mapreduce/*
2015-02-21 17:54:57,833 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (F=AM_PROXY_FILTER,[/mapreduce/*],[],15) as filterMapping
2015-02-21 17:54:57,833 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:57,833 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15), (F=AM_PROXY_FILTER,[/stacks],[],15), (F=AM_PROXY_FILTER,[/logLevel],[],15), (F=AM_PROXY_FILTER,[/metrics],[],15), (F=AM_PROXY_FILTER,[/jmx],[],15), (F=AM_PROXY_FILTER,[/conf],[],15), (F=AM_PROXY_FILTER,[/mapreduce/*],[],15)]
2015-02-21 17:54:57,833 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,833 DEBUG [main] org.mortbay.log: servletPathMap={/metrics=metrics, /conf=conf, /jmx=jmx, /stacks=stacks, /logLevel=logLevel}
2015-02-21 17:54:57,833 DEBUG [main] org.mortbay.log: servletNameMap={jmx=jmx, metrics=metrics, logLevel=logLevel, conf=conf, stacks=stacks}
2015-02-21 17:54:57,834 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2015-02-21 17:54:57,834 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (F=AM_PROXY_FILTER,[/ws/*],[],15) as filterMapping
2015-02-21 17:54:57,834 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:57,834 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15), (F=AM_PROXY_FILTER,[/stacks],[],15), (F=AM_PROXY_FILTER,[/logLevel],[],15), (F=AM_PROXY_FILTER,[/metrics],[],15), (F=AM_PROXY_FILTER,[/jmx],[],15), (F=AM_PROXY_FILTER,[/conf],[],15), (F=AM_PROXY_FILTER,[/mapreduce/*],[],15), (F=AM_PROXY_FILTER,[/ws/*],[],15)]
2015-02-21 17:54:57,834 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,834 DEBUG [main] org.mortbay.log: servletPathMap={/metrics=metrics, /conf=conf, /jmx=jmx, /stacks=stacks, /logLevel=logLevel}
2015-02-21 17:54:57,834 DEBUG [main] org.mortbay.log: servletNameMap={jmx=jmx, metrics=metrics, logLevel=logLevel, conf=conf, stacks=stacks}
2015-02-21 17:54:57,845 DEBUG [main] org.mortbay.log: Container Server@541821e6 + HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:0 as connector
2015-02-21 17:54:57,846 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + guice as filter
2015-02-21 17:54:57,846 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (F=guice,[/*],[],15) as filterMapping
2015-02-21 17:54:57,846 DEBUG [main] org.mortbay.log: filterNameMap={guice=guice, safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:57,846 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15), (F=AM_PROXY_FILTER,[/stacks],[],15), (F=AM_PROXY_FILTER,[/logLevel],[],15), (F=AM_PROXY_FILTER,[/metrics],[],15), (F=AM_PROXY_FILTER,[/jmx],[],15), (F=AM_PROXY_FILTER,[/conf],[],15), (F=AM_PROXY_FILTER,[/mapreduce/*],[],15), (F=AM_PROXY_FILTER,[/ws/*],[],15), (F=guice,[/*],[],15)]
2015-02-21 17:54:57,846 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:57,846 DEBUG [main] org.mortbay.log: servletPathMap={/metrics=metrics, /conf=conf, /jmx=jmx, /stacks=stacks, /logLevel=logLevel}
2015-02-21 17:54:57,846 DEBUG [main] org.mortbay.log: servletNameMap={jmx=jmx, metrics=metrics, logLevel=logLevel, conf=conf, stacks=stacks}
2015-02-21 17:54:57,847 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 40686
2015-02-21 17:54:57,847 INFO [main] org.mortbay.log: jetty-6.1.26.cloudera.4
2015-02-21 17:54:57,858 DEBUG [main] org.mortbay.log: started org.mortbay.thread.QueuedThreadPool@458ba94d
2015-02-21 17:54:57,878 DEBUG [main] org.mortbay.log: Thread Context class loader is: ContextLoader@mapreduce([]) / sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 17:54:57,878 DEBUG [main] org.mortbay.log: Parent class loader is: sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 17:54:57,878 DEBUG [main] org.mortbay.log: Parent class loader is: sun.misc.Launcher$ExtClassLoader@21a722ef
2015-02-21 17:54:57,878 DEBUG [main] org.mortbay.log: Try webapp=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/mapreduce, exists=true, directory=true
2015-02-21 17:54:57,880 DEBUG [main] org.mortbay.log: Created temp dir /tmp/Jetty_0_0_0_0_40686_mapreduce____40v6rl for org.mortbay.jetty.webapp.WebAppContext@23f3e3fd{/,jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/mapreduce}
2015-02-21 17:54:57,880 INFO [main] org.mortbay.log: Extract jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/mapreduce to /tmp/Jetty_0_0_0_0_40686_mapreduce____40v6rl/webapp
2015-02-21 17:54:57,880 DEBUG [main] org.mortbay.log: Extract jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/mapreduce to /tmp/Jetty_0_0_0_0_40686_mapreduce____40v6rl/webapp
2015-02-21 17:54:57,880 DEBUG [main] org.mortbay.log: Extracting entry = webapps/mapreduce from jar file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:57,881 DEBUG [main] org.mortbay.log: Skipping entry: META-INF/services/
2015-02-21 17:54:57,881 DEBUG [main] org.mortbay.log: Skipping entry: webapps/
2015-02-21 17:54:57,881 DEBUG [main] org.mortbay.log: Skipping entry: webapps/applicationhistory/
2015-02-21 17:54:57,881 DEBUG [main] org.mortbay.log: Skipping entry: webapps/cluster/
2015-02-21 17:54:57,881 DEBUG [main] org.mortbay.log: Skipping entry: webapps/jobhistory/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: webapps/node/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: webapps/proxy/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/css/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/images/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/js/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/themes-1.9.1/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/themes-1.9.1/base/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/themes-1.9.1/base/images/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jt/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: webapps/test/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: webapps/yarn/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: org/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/
2015-02-21 17:54:57,882 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/log/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/example/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/util/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/security/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/security/client/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/security/admin/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/api/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/api/impl/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/api/impl/pb/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/api/impl/pb/service/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/api/impl/pb/client/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/api/protocolrecords/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/api/protocolrecords/impl/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/api/protocolrecords/impl/pb/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/api/records/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/api/records/impl/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/api/records/impl/pb/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/server/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/server/api/
2015-02-21 17:54:57,883 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/server/api/protocolrecords/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/server/api/protocolrecords/impl/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/server/api/impl/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/server/api/impl/pb/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/server/api/impl/pb/client/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/server/api/impl/pb/service/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/server/security/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/event/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/ipc/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/util/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/util/timeline/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/util/resource/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/state/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/factories/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/factories/impl/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/factories/impl/pb/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/logaggregation/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/factory/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/factory/providers/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/client/
2015-02-21 17:54:57,884 DEBUG [main] org.mortbay.log: Skipping entry: META-INF/services/org.apache.hadoop.security.SecurityInfo
2015-02-21 17:54:57,885 DEBUG [main] org.mortbay.log: Skipping entry: META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
2015-02-21 17:54:57,885 DEBUG [main] org.mortbay.log: Skipping entry: META-INF/services/org.apache.hadoop.security.token.TokenRenewer
2015-02-21 17:54:57,885 DEBUG [main] org.mortbay.log: Skipping entry: META-INF/NOTICE
2015-02-21 17:54:57,885 DEBUG [main] org.mortbay.log: Skipping entry: META-INF/LICENSE
2015-02-21 17:54:57,885 DEBUG [main] org.mortbay.log: Skipping entry: META-INF/DEPENDENCIES
2015-02-21 17:54:57,885 DEBUG [main] org.mortbay.log: Skipping entry: webapps/applicationhistory/.keep
2015-02-21 17:54:57,885 DEBUG [main] org.mortbay.log: Skipping entry: webapps/cluster/.keep
2015-02-21 17:54:57,885 DEBUG [main] org.mortbay.log: Skipping entry: webapps/jobhistory/.keep
2015-02-21 17:54:57,889 DEBUG [main] org.mortbay.log: Skipping entry: webapps/node/.keep
2015-02-21 17:54:57,889 DEBUG [main] org.mortbay.log: Skipping entry: webapps/proxy/.keep
2015-02-21 17:54:57,889 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/busy.gif
2015-02-21 17:54:57,889 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/css/demo_page.css
2015-02-21 17:54:57,889 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/css/demo_table.css
2015-02-21 17:54:57,890 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/css/jui-dt.css
2015-02-21 17:54:57,890 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/images/Sorting icons.psd
2015-02-21 17:54:57,890 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/images/back_disabled.jpg
2015-02-21 17:54:57,890 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/images/back_enabled.jpg
2015-02-21 17:54:57,890 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/images/favicon.ico
2015-02-21 17:54:57,890 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/images/forward_disabled.jpg
2015-02-21 17:54:57,890 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/images/forward_enabled.jpg
2015-02-21 17:54:57,891 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/images/sort_asc.png
2015-02-21 17:54:57,891 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/images/sort_asc_disabled.png
2015-02-21 17:54:57,891 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/images/sort_both.png
2015-02-21 17:54:57,891 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/images/sort_desc.png
2015-02-21 17:54:57,891 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/images/sort_desc_disabled.png
2015-02-21 17:54:57,891 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/dt-1.9.4/js/jquery.dataTables.min.js.gz
2015-02-21 17:54:57,891 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/hadoop-st.png
2015-02-21 17:54:57,892 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/jquery-1.8.2.min.js.gz
2015-02-21 17:54:57,892 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/jquery-ui-1.9.1.custom.min.js.gz
2015-02-21 17:54:57,893 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/themes-1.9.1/base/images/ui-bg_flat_0_aaaaaa_40x100.png
2015-02-21 17:54:57,893 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/themes-1.9.1/base/images/ui-bg_flat_75_ffffff_40x100.png
2015-02-21 17:54:57,893 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/themes-1.9.1/base/images/ui-bg_glass_55_fbf9ee_1x400.png
2015-02-21 17:54:57,893 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/themes-1.9.1/base/images/ui-bg_glass_65_ffffff_1x400.png
2015-02-21 17:54:57,893 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/themes-1.9.1/base/images/ui-bg_glass_75_dadada_1x400.png
2015-02-21 17:54:57,893 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/themes-1.9.1/base/images/ui-bg_glass_75_e6e6e6_1x400.png
2015-02-21 17:54:57,893 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/themes-1.9.1/base/images/ui-bg_glass_95_fef1ec_1x400.png
2015-02-21 17:54:57,893 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/themes-1.9.1/base/images/ui-bg_highlight-soft_75_cccccc_1x100.png
2015-02-21 17:54:57,893 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/themes-1.9.1/base/images/ui-icons_222222_256x240.png
2015-02-21 17:54:57,894 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/themes-1.9.1/base/images/ui-icons_2e83ff_256x240.png
2015-02-21 17:54:57,894 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/themes-1.9.1/base/images/ui-icons_454545_256x240.png
2015-02-21 17:54:57,894 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/themes-1.9.1/base/images/ui-icons_888888_256x240.png
2015-02-21 17:54:57,894 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/themes-1.9.1/base/images/ui-icons_cd0a0a_256x240.png
2015-02-21 17:54:57,894 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jquery/themes-1.9.1/base/jquery-ui.css
2015-02-21 17:54:57,895 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/jt/jquery.jstree.js.gz
2015-02-21 17:54:57,895 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/yarn.css
2015-02-21 17:54:57,895 DEBUG [main] org.mortbay.log: Skipping entry: webapps/static/yarn.dt.plugins.js
2015-02-21 17:54:57,896 DEBUG [main] org.mortbay.log: Skipping entry: webapps/test/.keep
2015-02-21 17:54:57,896 DEBUG [main] org.mortbay.log: Skipping entry: webapps/yarn/.keep
2015-02-21 17:54:57,896 DEBUG [main] org.mortbay.log: Skipping entry: yarn-default.xml
2015-02-21 17:54:57,896 DEBUG [main] org.mortbay.log: Skipping entry: yarn-version-info.properties
2015-02-21 17:54:57,897 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/package-info.class
2015-02-21 17:54:57,897 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/package-info.class
2015-02-21 17:54:57,897 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/TextPage.class
2015-02-21 17:54:57,897 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/TextView.class
2015-02-21 17:54:57,897 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/HtmlPage$_.class
2015-02-21 17:54:57,897 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/HtmlPage$Page.class
2015-02-21 17:54:57,897 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/HtmlPage.class
2015-02-21 17:54:57,897 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/LipsumBlock.class
2015-02-21 17:54:57,897 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/HtmlBlock$Block.class
2015-02-21 17:54:57,897 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/HtmlBlock.class
2015-02-21 17:54:57,897 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/TwoColumnLayout.class
2015-02-21 17:54:57,898 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/JQueryUI.class
2015-02-21 17:54:57,898 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/InfoBlock.class
2015-02-21 17:54:57,898 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/TwoColumnCssLayout.class
2015-02-21 17:54:57,898 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/ErrorPage.class
2015-02-21 17:54:57,898 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/FooterBlock.class
2015-02-21 17:54:57,898 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/Html.class
2015-02-21 17:54:57,898 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/HeaderBlock.class
2015-02-21 17:54:57,899 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/NavBlock.class
2015-02-21 17:54:57,899 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/view/DefaultPage.class
2015-02-21 17:54:57,899 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/package-info.class
2015-02-21 17:54:57,899 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletImpl$EOpt.class
2015-02-21 17:54:57,899 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletImpl$EImp.class
2015-02-21 17:54:57,899 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletImpl$Generic.class
2015-02-21 17:54:57,899 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletImpl.class
2015-02-21 17:54:57,899 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$Shape.class
2015-02-21 17:54:57,899 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$Dir.class
2015-02-21 17:54:57,900 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$Media.class
2015-02-21 17:54:57,900 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$LinkType.class
2015-02-21 17:54:57,900 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$Method.class
2015-02-21 17:54:57,900 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$InputType.class
2015-02-21 17:54:57,900 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$ButtonType.class
2015-02-21 17:54:57,900 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$Scope.class
2015-02-21 17:54:57,900 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$Element.class
2015-02-21 17:54:57,900 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_.class
2015-02-21 17:54:57,900 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Child.class
2015-02-21 17:54:57,900 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Script.class
2015-02-21 17:54:57,900 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Object.class
2015-02-21 17:54:57,900 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$HeadMisc.class
2015-02-21 17:54:57,901 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$Heading.class
2015-02-21 17:54:57,901 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$Listing.class
2015-02-21 17:54:57,901 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$Preformatted.class
2015-02-21 17:54:57,901 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$CoreAttrs.class
2015-02-21 17:54:57,901 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$I18nAttrs.class
2015-02-21 17:54:57,901 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$EventsAttrs.class
2015-02-21 17:54:57,901 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$Attrs.class
2015-02-21 17:54:57,901 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_FontSize.class
2015-02-21 17:54:57,901 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_FontStyle.class
2015-02-21 17:54:57,901 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$FontStyle.class
2015-02-21 17:54:57,901 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$Phrase.class
2015-02-21 17:54:57,901 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_ImgObject.class
2015-02-21 17:54:57,901 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_SubSup.class
2015-02-21 17:54:57,901 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Anchor.class
2015-02-21 17:54:57,901 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_InsDel.class
2015-02-21 17:54:57,902 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Special.class
2015-02-21 17:54:57,902 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$Special.class
2015-02-21 17:54:57,902 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Label.class
2015-02-21 17:54:57,902 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_FormCtrl.class
2015-02-21 17:54:57,902 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$FormCtrl.class
2015-02-21 17:54:57,902 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Content.class
2015-02-21 17:54:57,902 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_RawContent.class
2015-02-21 17:54:57,902 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$PCData.class
2015-02-21 17:54:57,902 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$Inline.class
2015-02-21 17:54:57,902 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$I.class
2015-02-21 17:54:57,902 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$B.class
2015-02-21 17:54:57,902 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$SMALL.class
2015-02-21 17:54:57,902 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$EM.class
2015-02-21 17:54:57,903 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$STRONG.class
2015-02-21 17:54:57,903 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$DFN.class
2015-02-21 17:54:57,903 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$CODE.class
2015-02-21 17:54:57,903 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$SAMP.class
2015-02-21 17:54:57,903 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$KBD.class
2015-02-21 17:54:57,903 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$VAR.class
2015-02-21 17:54:57,903 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$CITE.class
2015-02-21 17:54:57,903 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$ABBR.class
2015-02-21 17:54:57,903 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$ACRONYM.class
2015-02-21 17:54:57,903 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$SUB.class
2015-02-21 17:54:57,903 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$SUP.class
2015-02-21 17:54:57,903 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$SPAN.class
2015-02-21 17:54:57,903 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$BDO.class
2015-02-21 17:54:57,903 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$BR.class
2015-02-21 17:54:57,903 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Form.class
2015-02-21 17:54:57,904 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_FieldSet.class
2015-02-21 17:54:57,904 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Block.class
2015-02-21 17:54:57,904 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$Block.class
2015-02-21 17:54:57,904 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$Flow.class
2015-02-21 17:54:57,904 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Body.class
2015-02-21 17:54:57,904 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$BODY.class
2015-02-21 17:54:57,904 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$ADDRESS.class
2015-02-21 17:54:57,904 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$DIV.class
2015-02-21 17:54:57,904 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$A.class
2015-02-21 17:54:57,904 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$MAP.class
2015-02-21 17:54:57,904 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$AREA.class
2015-02-21 17:54:57,904 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$LINK.class
2015-02-21 17:54:57,904 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$IMG.class
2015-02-21 17:54:57,905 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Param.class
2015-02-21 17:54:57,905 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$OBJECT.class
2015-02-21 17:54:57,905 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$PARAM.class
2015-02-21 17:54:57,905 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$HR.class
2015-02-21 17:54:57,905 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$P.class
2015-02-21 17:54:57,905 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$H1.class
2015-02-21 17:54:57,905 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$H2.class
2015-02-21 17:54:57,905 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$H3.class
2015-02-21 17:54:57,905 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$H4.class
2015-02-21 17:54:57,905 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$H5.class
2015-02-21 17:54:57,905 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$H6.class
2015-02-21 17:54:57,905 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$PRE.class
2015-02-21 17:54:57,905 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$Q.class
2015-02-21 17:54:57,905 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$BLOCKQUOTE.class
2015-02-21 17:54:57,905 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$INS.class
2015-02-21 17:54:57,906 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$DEL.class
2015-02-21 17:54:57,906 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Dl.class
2015-02-21 17:54:57,906 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$DL.class
2015-02-21 17:54:57,906 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$DT.class
2015-02-21 17:54:57,906 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$DD.class
2015-02-21 17:54:57,906 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Li.class
2015-02-21 17:54:57,906 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$OL.class
2015-02-21 17:54:57,906 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$UL.class
2015-02-21 17:54:57,906 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$LI.class
2015-02-21 17:54:57,906 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$FORM.class
2015-02-21 17:54:57,906 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$LABEL.class
2015-02-21 17:54:57,906 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$INPUT.class
2015-02-21 17:54:57,906 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Option.class
2015-02-21 17:54:57,906 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$SELECT.class
2015-02-21 17:54:57,907 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$OPTGROUP.class
2015-02-21 17:54:57,907 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$OPTION.class
2015-02-21 17:54:57,907 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$TEXTAREA.class
2015-02-21 17:54:57,907 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Legend.class
2015-02-21 17:54:57,907 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$FIELDSET.class
2015-02-21 17:54:57,907 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$LEGEND.class
2015-02-21 17:54:57,907 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$BUTTON.class
2015-02-21 17:54:57,907 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_TableRow.class
2015-02-21 17:54:57,907 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_TableCol.class
2015-02-21 17:54:57,907 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Table.class
2015-02-21 17:54:57,907 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$TABLE.class
2015-02-21 17:54:57,907 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$CAPTION.class
2015-02-21 17:54:57,907 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$THEAD.class
2015-02-21 17:54:57,908 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$TFOOT.class
2015-02-21 17:54:57,908 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$TBODY.class
2015-02-21 17:54:57,908 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$COLGROUP.class
2015-02-21 17:54:57,908 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$COL.class
2015-02-21 17:54:57,908 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Tr.class
2015-02-21 17:54:57,908 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$TR.class
2015-02-21 17:54:57,908 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Cell.class
2015-02-21 17:54:57,908 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$TH.class
2015-02-21 17:54:57,908 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$TD.class
2015-02-21 17:54:57,908 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Head.class
2015-02-21 17:54:57,908 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$HEAD.class
2015-02-21 17:54:57,908 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$TITLE.class
2015-02-21 17:54:57,908 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$BASE.class
2015-02-21 17:54:57,908 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$META.class
2015-02-21 17:54:57,908 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$STYLE.class
2015-02-21 17:54:57,909 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$SCRIPT.class
2015-02-21 17:54:57,909 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$_Html.class
2015-02-21 17:54:57,909 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec$HTML.class
2015-02-21 17:54:57,909 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletSpec.class
2015-02-21 17:54:57,909 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$HTML.class
2015-02-21 17:54:57,909 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$SCRIPT.class
2015-02-21 17:54:57,910 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$STYLE.class
2015-02-21 17:54:57,910 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$META.class
2015-02-21 17:54:57,910 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$BASE.class
2015-02-21 17:54:57,910 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$TITLE.class
2015-02-21 17:54:57,910 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$HEAD.class
2015-02-21 17:54:57,910 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$TD.class
2015-02-21 17:54:57,911 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$TH.class
2015-02-21 17:54:57,912 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$TR.class
2015-02-21 17:54:57,912 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$COL.class
2015-02-21 17:54:57,912 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$COLGROUP.class
2015-02-21 17:54:57,912 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$TBODY.class
2015-02-21 17:54:57,913 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$TFOOT.class
2015-02-21 17:54:57,913 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$THEAD.class
2015-02-21 17:54:57,913 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$CAPTION.class
2015-02-21 17:54:57,913 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$TABLE.class
2015-02-21 17:54:57,914 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$BUTTON.class
2015-02-21 17:54:57,914 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$LEGEND.class
2015-02-21 17:54:57,915 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$FIELDSET.class
2015-02-21 17:54:57,915 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$TEXTAREA.class
2015-02-21 17:54:57,916 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$OPTION.class
2015-02-21 17:54:57,916 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$OPTGROUP.class
2015-02-21 17:54:57,916 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$SELECT.class
2015-02-21 17:54:57,916 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$INPUT.class
2015-02-21 17:54:57,916 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$LABEL.class
2015-02-21 17:54:57,917 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$FORM.class
2015-02-21 17:54:57,917 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$LI.class
2015-02-21 17:54:57,918 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$UL.class
2015-02-21 17:54:57,918 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$OL.class
2015-02-21 17:54:57,918 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$DD.class
2015-02-21 17:54:57,919 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$DT.class
2015-02-21 17:54:57,919 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$DL.class
2015-02-21 17:54:57,919 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$DEL.class
2015-02-21 17:54:57,920 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$INS.class
2015-02-21 17:54:57,921 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$BLOCKQUOTE.class
2015-02-21 17:54:57,921 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$Q.class
2015-02-21 17:54:57,922 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$PRE.class
2015-02-21 17:54:57,922 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$H6.class
2015-02-21 17:54:57,923 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$H5.class
2015-02-21 17:54:57,923 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$H4.class
2015-02-21 17:54:57,924 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$H3.class
2015-02-21 17:54:57,924 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$H2.class
2015-02-21 17:54:57,925 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$H1.class
2015-02-21 17:54:57,925 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$P.class
2015-02-21 17:54:57,926 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$HR.class
2015-02-21 17:54:57,926 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$PARAM.class
2015-02-21 17:54:57,926 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$OBJECT.class
2015-02-21 17:54:57,926 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$IMG.class
2015-02-21 17:54:57,927 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$LINK.class
2015-02-21 17:54:57,927 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$AREA.class
2015-02-21 17:54:57,927 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$MAP.class
2015-02-21 17:54:57,927 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$A.class
2015-02-21 17:54:57,928 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$DIV.class
2015-02-21 17:54:57,928 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$ADDRESS.class
2015-02-21 17:54:57,929 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$BODY.class
2015-02-21 17:54:57,929 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$BR.class
2015-02-21 17:54:57,929 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$BDO.class
2015-02-21 17:54:57,930 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$SPAN.class
2015-02-21 17:54:57,930 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$SUP.class
2015-02-21 17:54:57,931 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$SUB.class
2015-02-21 17:54:57,931 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$ACRONYM.class
2015-02-21 17:54:57,932 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$ABBR.class
2015-02-21 17:54:57,932 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$CITE.class
2015-02-21 17:54:57,933 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$VAR.class
2015-02-21 17:54:57,933 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$KBD.class
2015-02-21 17:54:57,934 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$SAMP.class
2015-02-21 17:54:57,934 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$CODE.class
2015-02-21 17:54:57,935 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$DFN.class
2015-02-21 17:54:57,935 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$STRONG.class
2015-02-21 17:54:57,936 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$EM.class
2015-02-21 17:54:57,936 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$SMALL.class
2015-02-21 17:54:57,937 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$B.class
2015-02-21 17:54:57,937 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet$I.class
2015-02-21 17:54:57,938 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/Hamlet.class
2015-02-21 17:54:57,938 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/hamlet/HamletGen.class
2015-02-21 17:54:57,939 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/log/package-info.class
2015-02-21 17:54:57,939 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/log/AggregatedLogsPage.class
2015-02-21 17:54:57,939 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/log/AggregatedLogsBlock$LogLimits.class
2015-02-21 17:54:57,939 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/log/AggregatedLogsBlock$1.class
2015-02-21 17:54:57,939 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/log/AggregatedLogsBlock.class
2015-02-21 17:54:57,939 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/log/AggregatedLogsNavBlock.class
2015-02-21 17:54:57,939 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/example/package-info.class
2015-02-21 17:54:57,940 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/example/MyApp$MyController.class
2015-02-21 17:54:57,940 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/example/MyApp$MyView.class
2015-02-21 17:54:57,940 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/example/MyApp.class
2015-02-21 17:54:57,940 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/example/HelloWorld$Hello.class
2015-02-21 17:54:57,940 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/example/HelloWorld$HelloView.class
2015-02-21 17:54:57,940 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/webapp/example/HelloWorld.class
2015-02-21 17:54:57,940 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/DefaultWrapperServlet$1.class=0A= 2015-02-21 17:54:57,940 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/DefaultWrapperServlet.class=0A= 2015-02-21 17:54:57,940 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/View$ViewContext.class=0A= 2015-02-21 17:54:57,940 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/View.class=0A= 2015-02-21 17:54:57,940 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/Params.class=0A= 2015-02-21 17:54:57,940 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/ResponseInfo$Item.class=0A= 2015-02-21 17:54:57,941 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/ResponseInfo.class=0A= 2015-02-21 17:54:57,941 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/SubView.class=0A= 2015-02-21 17:54:57,941 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/Controller$RequestContext.class=0A= 2015-02-21 17:54:57,941 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/Controller.class=0A= 2015-02-21 17:54:57,941 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/ToJSON.class=0A= 2015-02-21 17:54:57,941 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/BadRequestException.class=0A= 2015-02-21 17:54:57,941 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/YarnJacksonJaxbJsonProvider.class=0A= 2015-02-21 17:54:57,941 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/YarnWebParams.class=0A= 2015-02-21 17:54:57,941 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/WebApp$HTTP.class=0A= 2015-02-21 17:54:57,941 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/WebApp.class=0A= 2015-02-21 17:54:57,942 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/Router$Dest.class=0A= 2015-02-21 17:54:57,942 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/Router.class=0A= 2015-02-21 17:54:57,942 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/RemoteExceptionData.class=0A= 2015-02-21 17:54:57,942 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/NotFoundException.class=0A= 2015-02-21 17:54:57,942 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/MimeType.class=0A= 2015-02-21 17:54:57,942 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/WebApps$Builder$ServletStruct.class=0A= 2015-02-21 17:54:57,942 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/WebApps$Builder$1.class=0A= 2015-02-21 17:54:57,942 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/WebApps$Builder$2.class=0A= 2015-02-21 17:54:57,942 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/WebApps$Builder.class=0A= 2015-02-21 17:54:57,943 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/WebApps.class=0A= 2015-02-21 17:54:57,943 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/Dispatcher$1.class=0A= 2015-02-21 17:54:57,943 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/Dispatcher.class=0A= 
2015-02-21 17:54:57,943 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/GenericExceptionHandler.class=0A= 2015-02-21 17:54:57,943 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/WebAppException.class=0A= 2015-02-21 17:54:57,943 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/util/WebAppUtils.class=0A= 2015-02-21 17:54:57,944 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/package-info.class=0A= 2015-02-21 17:54:57,944 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientTimelineSecurityInfo$1.class=0A= 2015-02-21 17:54:57,944 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientTimelineSecurityInfo$2.class=0A= 2015-02-21 17:54:57,944 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientTimelineSecurityInfo.class=0A= 2015-02-21 17:54:57,944 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/TimelineDelegationTokenOperation.c= lass=0A= 2015-02-21 17:54:57,944 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/TimelineAuthenticationConsts.class=0A= 2015-02-21 17:54:57,944 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientRMSecurityInfo$1.class=0A= 2015-02-21 17:54:57,944 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientRMSecurityInfo$2.class=0A= 2015-02-21 17:54:57,944 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientRMSecurityInfo.class=0A= 2015-02-21 17:54:57,944 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/BaseClientToAMTokenSecretManager.c= lass=0A= 2015-02-21 17:54:57,944 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientToAMTokenIdentifier$Renewer.= class=0A= 2015-02-21 17:54:57,945 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientToAMTokenIdentifier.class=0A= 2015-02-21 17:54:57,945 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/TimelineDelegationTokenSelector.cl= ass=0A= 2015-02-21 17:54:57,945 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/TimelineDelegationTokenIdentifier$= Renewer.class=0A= 2015-02-21 17:54:57,945 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/TimelineDelegationTokenIdentifier.= class=0A= 2015-02-21 17:54:57,945 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/RMDelegationTokenIdentifier$Renewe= r.class=0A= 2015-02-21 17:54:57,945 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/RMDelegationTokenIdentifier.class=0A= 2015-02-21 17:54:57,945 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/RMDelegationTokenSelector.class=0A= 2015-02-21 17:54:57,945 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientToAMTokenSecretManager.class=0A= 2015-02-21 17:54:57,945 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientToAMTokenSelector.class=0A= 2015-02-21 17:54:57,945 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/package-info.class=0A= 2015-02-21 17:54:57,946 DEBUG 
[main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/admin/package-info.class=0A= 2015-02-21 17:54:57,946 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/admin/AdminSecurityInfo$1.class=0A= 2015-02-21 17:54:57,946 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/admin/AdminSecurityInfo.class=0A= 2015-02-21 17:54:57,946 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/AMRMTokenIdentifier$Renewer.class=0A= 2015-02-21 17:54:57,946 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/AMRMTokenIdentifier.class=0A= 2015-02-21 17:54:57,946 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/ContainerTokenIdentifier$Renewer.class=0A= 2015-02-21 17:54:57,946 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/ContainerTokenIdentifier.class=0A= 2015-02-21 17:54:57,946 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/SchedulerSecurityInfo$1.class=0A= 2015-02-21 17:54:57,946 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/SchedulerSecurityInfo.class=0A= 2015-02-21 17:54:57,946 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/AMRMTokenSelector.class=0A= 2015-02-21 17:54:57,946 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/AdminACLsManager.class=0A= 2015-02-21 17:54:57,947 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/ContainerManagerSecurityInfo$1.class=0A= 2015-02-21 17:54:57,947 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/ContainerManagerSecurityInfo.class=0A= 2015-02-21 17:54:57,947 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/NMTokenIdentifier.class=0A= 2015-02-21 17:54:57,947 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/NMTokenSelector.class=0A= 2015-02-21 17:54:57,947 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/ContainerTokenSelector.class=0A= 2015-02-21 17:54:57,947 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/service/package-info.class=0A= 2015-02-21 17:54:57,947 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/service/ContainerManagementProtocolPBS= erviceImpl.class=0A= 2015-02-21 17:54:57,947 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/service/ApplicationMasterProtocolPBSer= viceImpl.class=0A= 2015-02-21 17:54:57,947 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/service/ApplicationHistoryProtocolPBSe= rviceImpl.class=0A= 2015-02-21 17:54:57,948 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/service/ApplicationClientProtocolPBSer= viceImpl.class=0A= 2015-02-21 17:54:57,948 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/client/package-info.class=0A= 2015-02-21 17:54:57,948 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/client/ContainerManagementProtocolPBCl= ientImpl.class=0A= 2015-02-21 17:54:57,948 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/client/ApplicationMasterProtocolPBClie= ntImpl.class=0A= 2015-02-21 17:54:57,948 DEBUG [main] org.mortbay.log: Skipping entry: = 
org/apache/hadoop/yarn/api/impl/pb/client/ApplicationHistoryProtocolPBCli= entImpl.class=0A= 2015-02-21 17:54:57,948 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/client/ApplicationClientProtocolPBClie= ntImpl.class=0A= 2015-02-21 17:54:57,949 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/package-info.class=0A= 2015-02-21 17:54:57,949 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMas= terRequestPBImpl.class=0A= 2015-02-21 17:54:57,949 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationAttemptR= eportRequestPBImpl.class=0A= 2015-02-21 17:54:57,949 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetNewApplicationReque= stPBImpl.class=0A= 2015-02-21 17:54:57,949 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetContainersRequestPB= Impl.class=0A= 2015-02-21 17:54:57,949 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/SubmitApplicationReque= stPBImpl.class=0A= 2015-02-21 17:54:57,949 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationAttempts= RequestPBImpl.class=0A= 2015-02-21 17:54:57,950 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationReportRe= sponsePBImpl.class=0A= 2015-02-21 17:54:57,950 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/CancelDelegationTokenR= esponsePBImpl.class=0A= 2015-02-21 17:54:57,950 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $1$1.class=0A= 2015-02-21 17:54:57,950 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $1.class=0A= 2015-02-21 17:54:57,950 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $2$1.class=0A= 2015-02-21 17:54:57,950 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $2.class=0A= 2015-02-21 17:54:57,950 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $3$1.class=0A= 2015-02-21 17:54:57,950 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $3.class=0A= 2015-02-21 17:54:57,950 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $4$1.class=0A= 2015-02-21 17:54:57,950 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $4.class=0A= 2015-02-21 17:54:57,951 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $5$1.class=0A= 2015-02-21 17:54:57,951 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $5.class=0A= 2015-02-21 17:54:57,951 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $6$1.class=0A= 2015-02-21 17:54:57,951 
DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $6.class=0A= 2015-02-21 17:54:57,951 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= .class=0A= 2015-02-21 17:54:57,951 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMas= terResponsePBImpl$1$1.class=0A= 2015-02-21 17:54:57,951 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMas= terResponsePBImpl$1.class=0A= 2015-02-21 17:54:57,951 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMas= terResponsePBImpl$2$1.class=0A= 2015-02-21 17:54:57,952 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMas= terResponsePBImpl$2.class=0A= 2015-02-21 17:54:57,952 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMas= terResponsePBImpl$3$1.class=0A= 2015-02-21 17:54:57,952 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMas= terResponsePBImpl$3.class=0A= 2015-02-21 17:54:57,952 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMas= terResponsePBImpl.class=0A= 2015-02-21 17:54:57,952 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RenewDelegationTokenRe= sponsePBImpl.class=0A= 2015-02-21 17:54:57,952 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetQueueInfoResponsePB= Impl.class=0A= 2015-02-21 17:54:57,952 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateRequestPBImpl$= 1$1.class=0A= 2015-02-21 17:54:57,952 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateRequestPBImpl$= 1.class=0A= 2015-02-21 17:54:57,953 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateRequestPBImpl$= 2$1.class=0A= 2015-02-21 17:54:57,953 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateRequestPBImpl$= 2.class=0A= 2015-02-21 17:54:57,953 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateRequestPBImpl$= 3$1.class=0A= 2015-02-21 17:54:57,953 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateRequestPBImpl$= 3.class=0A= 2015-02-21 17:54:57,953 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateRequestPBImpl.= class=0A= 2015-02-21 17:54:57,953 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetNewApplicationRespo= nsePBImpl.class=0A= 2015-02-21 17:54:57,953 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetDelegationTokenResp= onsePBImpl.class=0A= 2015-02-21 17:54:57,953 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainersRespons= ePBImpl$1$1.class=0A= 2015-02-21 17:54:57,954 DEBUG [main] org.mortbay.log: Skipping 
entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainersRespons= ePBImpl$1.class=0A= 2015-02-21 17:54:57,954 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainersRespons= ePBImpl$2$1.class=0A= 2015-02-21 17:54:57,954 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainersRespons= ePBImpl$2.class=0A= 2015-02-21 17:54:57,954 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainersRespons= ePBImpl.class=0A= 2015-02-21 17:54:57,954 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetDelegationTokenRequ= estPBImpl.class=0A= 2015-02-21 17:54:57,954 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StopContainersResponse= PBImpl$1$1.class=0A= 2015-02-21 17:54:57,954 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StopContainersResponse= PBImpl$1.class=0A= 2015-02-21 17:54:57,954 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StopContainersResponse= PBImpl.class=0A= 2015-02-21 17:54:57,955 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainersRequest= PBImpl.class=0A= 2015-02-21 17:54:57,955 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainerRequestP= BImpl.class=0A= 2015-02-21 17:54:57,955 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationAttempts= ResponsePBImpl$1$1.class=0A= 2015-02-21 17:54:57,955 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationAttempts= ResponsePBImpl$1.class=0A= 2015-02-21 17:54:57,955 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationAttempts= ResponsePBImpl.class=0A= 2015-02-21 17:54:57,955 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetContainerStatusesRe= questPBImpl.class=0A= 2015-02-21 17:54:57,955 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequest= PBImpl$1$1.class=0A= 2015-02-21 17:54:57,955 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequest= PBImpl$1.class=0A= 2015-02-21 17:54:57,956 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequest= PBImpl.class=0A= 2015-02-21 17:54:57,956 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetContainersResponseP= BImpl$1$1.class=0A= 2015-02-21 17:54:57,956 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetContainersResponseP= BImpl$1.class=0A= 2015-02-21 17:54:57,956 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetContainersResponseP= BImpl.class=0A= 2015-02-21 17:54:57,956 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationAttemptR= eportResponsePBImpl.class=0A= 2015-02-21 17:54:57,956 DEBUG [main] org.mortbay.log: Skipping entry: = 
org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetClusterMetricsReque= stPBImpl.class=0A= 2015-02-21 17:54:57,956 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetQueueUserAclsInfoRe= questPBImpl.class=0A= 2015-02-21 17:54:57,957 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/MoveApplicationAcrossQ= ueuesResponsePBImpl.class=0A= 2015-02-21 17:54:57,957 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/FinishApplicationMaste= rRequestPBImpl.class=0A= 2015-02-21 17:54:57,957 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetClusterMetricsRespo= nsePBImpl.class=0A= 2015-02-21 17:54:57,957 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetContainerReportResp= onsePBImpl.class=0A= 2015-02-21 17:54:57,957 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationReportRe= questPBImpl.class=0A= 2015-02-21 17:54:57,957 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetClusterNodesRespons= ePBImpl$1$1.class=0A= 2015-02-21 17:54:57,957 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetClusterNodesRespons= ePBImpl$1.class=0A= 2015-02-21 17:54:57,957 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetClusterNodesRespons= ePBImpl.class=0A= 2015-02-21 17:54:57,958 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/MoveApplicationAcrossQ= ueuesRequestPBImpl.class=0A= 2015-02-21 17:54:57,958 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/FinishApplicationMaste= rResponsePBImpl.class=0A= 2015-02-21 17:54:57,958 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/KillApplicationRespons= ePBImpl.class=0A= 2015-02-21 17:54:57,958 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetQueueUserAclsInfoRe= sponsePBImpl$1$1.class=0A= 2015-02-21 17:54:57,958 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetQueueUserAclsInfoRe= sponsePBImpl$1.class=0A= 2015-02-21 17:54:57,958 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetQueueUserAclsInfoRe= sponsePBImpl.class=0A= 2015-02-21 17:54:57,958 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetContainerStatusesRe= sponsePBImpl.class=0A= 2015-02-21 17:54:57,959 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RenewDelegationTokenRe= questPBImpl.class=0A= 2015-02-21 17:54:57,959 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/CancelDelegationTokenR= equestPBImpl.class=0A= 2015-02-21 17:54:57,959 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRespons= ePBImpl$1$1.class=0A= 2015-02-21 17:54:57,959 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRespons= ePBImpl$1.class=0A= 2015-02-21 17:54:57,959 DEBUG [main] org.mortbay.log: Skipping 
entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRespons= ePBImpl.class=0A= 2015-02-21 17:54:57,959 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetClusterNodesRequest= PBImpl$1$1.class=0A= 2015-02-21 17:54:57,959 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetClusterNodesRequest= PBImpl$1.class=0A= 2015-02-21 17:54:57,959 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetClusterNodesRequest= PBImpl.class=0A= 2015-02-21 17:54:57,959 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetQueueInfoRequestPBI= mpl.class=0A= 2015-02-21 17:54:57,960 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/SubmitApplicationRespo= nsePBImpl.class=0A= 2015-02-21 17:54:57,960 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/KillApplicationRequest= PBImpl.class=0A= 2015-02-21 17:54:57,960 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetContainerReportRequ= estPBImpl.class=0A= 2015-02-21 17:54:57,960 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StopContainersRequestP= BImpl.class=0A= 2015-02-21 17:54:57,960 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/package-info.class=0A= 2015-02-21 17:54:57,960 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.class=0A= 2015-02-21 17:54:57,960 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PriorityPBImpl.class=0A= 2015-02-21 17:54:57,961 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ResourcePBImpl.class=0A= 2015-02-21 17:54:57,961 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ApplicationAttemptIdPBImpl.cla= ss=0A= 2015-02-21 17:54:57,961 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ApplicationIdPBImpl.class=0A= 2015-02-21 17:54:57,961 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContractPBImpl$1$1.c= lass=0A= 2015-02-21 17:54:57,961 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContractPBImpl$1.cla= ss=0A= 2015-02-21 17:54:57,961 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContractPBImpl$2$1.c= lass=0A= 2015-02-21 17:54:57,961 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContractPBImpl$2.cla= ss=0A= 2015-02-21 17:54:57,961 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContractPBImpl.class=0A= 2015-02-21 17:54:57,961 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContainerPBImpl.clas= s=0A= 2015-02-21 17:54:57,962 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PreemptionResourceRequestPBImp= l.class=0A= 2015-02-21 17:54:57,962 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerIdPBImpl.class=0A= 2015-02-21 17:54:57,962 DEBUG [main] 
org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ApplicationReportPBImpl.class=0A= 2015-02-21 17:54:57,962 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/TokenPBImpl.class=0A= 2015-02-21 17:54:57,962 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ApplicationSubmissionContextPB= Impl.class=0A= 2015-02-21 17:54:57,963 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl$1= $1.class=0A= 2015-02-21 17:54:57,963 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl$1= .class=0A= 2015-02-21 17:54:57,963 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl$2= $1.class=0A= 2015-02-21 17:54:57,963 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl$2= .class=0A= 2015-02-21 17:54:57,963 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl$3= $1.class=0A= 2015-02-21 17:54:57,963 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl$3= .class=0A= 2015-02-21 17:54:57,963 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl$4= $1.class=0A= 2015-02-21 17:54:57,963 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl$4= .class=0A= 2015-02-21 17:54:57,963 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl.c= lass=0A= 2015-02-21 17:54:57,964 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/LocalResourcePBImpl.class=0A= 2015-02-21 17:54:57,964 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/URLPBImpl.class=0A= 2015-02-21 17:54:57,964 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ResourceOptionPBImpl.class=0A= 2015-02-21 17:54:57,964 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/NodeIdPBImpl.class=0A= 2015-02-21 17:54:57,964 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/QueueInfoPBImpl$1$1.class=0A= 2015-02-21 17:54:57,964 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/QueueInfoPBImpl$1.class=0A= 2015-02-21 17:54:57,964 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/QueueInfoPBImpl$2$1.class=0A= 2015-02-21 17:54:57,964 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/QueueInfoPBImpl$2.class=0A= 2015-02-21 17:54:57,964 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/QueueInfoPBImpl.class=0A= 2015-02-21 17:54:57,965 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/NodeReportPBImpl.class=0A= 2015-02-21 17:54:57,965 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerPBImpl.class=0A= 2015-02-21 17:54:57,965 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerStatusPBImpl.class=0A= 
2015-02-21 17:54:57,965 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PreemptionMessagePBImpl.class=0A= 2015-02-21 17:54:57,965 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/StrictPreemptionContractPBImpl= $1$1.class=0A= 2015-02-21 17:54:57,966 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/StrictPreemptionContractPBImpl= $1.class=0A= 2015-02-21 17:54:57,966 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/StrictPreemptionContractPBImpl= .class=0A= 2015-02-21 17:54:57,966 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/SerializedExceptionPBImpl.clas= s=0A= 2015-02-21 17:54:57,966 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ProtoBase.class=0A= 2015-02-21 17:54:57,966 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerResourceIncreaseReque= stPBImpl.class=0A= 2015-02-21 17:54:57,966 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ResourceBlacklistRequestPBImpl= .class=0A= 2015-02-21 17:54:57,966 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ProtoUtils.class=0A= 2015-02-21 17:54:57,967 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ApplicationAttemptReportPBImpl= .class=0A= 2015-02-21 17:54:57,967 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerResourceDecreasePBImp= l.class=0A= 2015-02-21 17:54:57,967 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerReportPBImpl.class=0A= 2015-02-21 17:54:57,967 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/YarnClusterMetricsPBImpl.class=0A= 2015-02-21 17:54:57,967 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ApplicationResourceUsageReport= PBImpl.class=0A= 2015-02-21 17:54:57,967 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/QueueUserACLInfoPBImpl$1$1.cla= ss=0A= 2015-02-21 17:54:57,967 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/QueueUserACLInfoPBImpl$1.class=0A= 2015-02-21 17:54:57,968 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/QueueUserACLInfoPBImpl.class=0A= 2015-02-21 17:54:57,968 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/NMTokenPBImpl.class=0A= 2015-02-21 17:54:57,968 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerResourceIncreasePBImp= l.class=0A= 2015-02-21 17:54:57,968 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/ApplicationClientProtocolPB.class=0A= 2015-02-21 17:54:57,968 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/ContainerManagementProtocolPB.class=0A= 2015-02-21 17:54:57,968 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/ApplicationMasterProtocolPB.class=0A= 2015-02-21 17:54:57,968 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/ApplicationHistoryProtocolPB.class=0A= 2015-02-21 17:54:57,968 DEBUG [main] org.mortbay.log: Skipping entry: = 
org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/package-info.cl= ass=0A= 2015-02-21 17:54:57,968 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshUserToGr= oupsMappingsResponsePBImpl.class=0A= 2015-02-21 17:54:57,969 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshSuperUse= rGroupsConfigurationResponsePBImpl.class=0A= 2015-02-21 17:54:57,969 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshServiceA= clsRequestPBImpl.class=0A= 2015-02-21 17:54:57,969 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshServiceA= clsResponsePBImpl.class=0A= 2015-02-21 17:54:57,969 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/UpdateNodeResou= rceRequestPBImpl$1$1.class=0A= 2015-02-21 17:54:57,969 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/UpdateNodeResou= rceRequestPBImpl$1.class=0A= 2015-02-21 17:54:57,969 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/UpdateNodeResou= rceRequestPBImpl.class=0A= 2015-02-21 17:54:57,969 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshSuperUse= rGroupsConfigurationRequestPBImpl.class=0A= 2015-02-21 17:54:57,969 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshQueuesRe= questPBImpl.class=0A= 2015-02-21 17:54:57,969 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshUserToGr= oupsMappingsRequestPBImpl.class=0A= 2015-02-21 17:54:57,970 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshNodesRes= ponsePBImpl.class=0A= 2015-02-21 17:54:57,970 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshAdminAcl= sResponsePBImpl.class=0A= 2015-02-21 17:54:57,970 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/UpdateNodeResou= rceResponsePBImpl.class=0A= 2015-02-21 17:54:57,970 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshAdminAcl= sRequestPBImpl.class=0A= 2015-02-21 17:54:57,970 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshQueuesRe= sponsePBImpl.class=0A= 2015-02-21 17:54:57,970 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshNodesReq= uestPBImpl.class=0A= 2015-02-21 17:54:57,970 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/ResourceManagerAdministrationProtocolPB= .class=0A= 2015-02-21 17:54:57,970 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/impl/pb/client/ResourceManagerAdministr= ationProtocolPBClientImpl.class=0A= 2015-02-21 17:54:57,971 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/impl/pb/service/ResourceManagerAdminist= rationProtocolPBServiceImpl.class=0A= 2015-02-21 17:54:57,971 DEBUG [main] org.mortbay.log: Skipping entry: = 
org/apache/hadoop/yarn/server/security/package-info.class=0A= 2015-02-21 17:54:57,971 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/security/ApplicationACLsManager.class=0A= 2015-02-21 17:54:57,971 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/package-info.class=0A= 2015-02-21 17:54:57,971 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/AbstractEvent.class=0A= 2015-02-21 17:54:57,971 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/Event.class=0A= 2015-02-21 17:54:57,971 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/EventHandler.class=0A= 2015-02-21 17:54:57,971 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/Dispatcher.class=0A= 2015-02-21 17:54:57,971 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/AsyncDispatcher$1.class=0A= 2015-02-21 17:54:57,971 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/AsyncDispatcher$GenericEventHandler.class=0A= 2015-02-21 17:54:57,972 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/AsyncDispatcher$MultiListenerHandler.class=0A= 2015-02-21 17:54:57,972 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/AsyncDispatcher.class=0A= 2015-02-21 17:54:57,972 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/ipc/package-info.class=0A= 2015-02-21 17:54:57,972 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/ipc/RPCUtil.class=0A= 2015-02-21 17:54:57,972 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/ipc/YarnRPC.class=0A= 2015-02-21 17:54:57,972 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/ipc/HadoopYarnProtoRPC.class=0A= 2015-02-21 17:54:57,972 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/timeline/package-info.class=0A= 2015-02-21 17:54:57,972 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/timeline/TimelineUtils.class=0A= 2015-02-21 17:54:57,972 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/package-info.class=0A= 2015-02-21 17:54:57,972 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/StringHelper.class=0A= 2015-02-21 17:54:57,973 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ConverterUtils.class=0A= 2015-02-21 17:54:57,973 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/FSDownload$1.class=0A= 2015-02-21 17:54:57,973 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/FSDownload$2.class=0A= 2015-02-21 17:54:57,973 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/FSDownload$3.class=0A= 2015-02-21 17:54:57,973 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/FSDownload$4.class=0A= 2015-02-21 17:54:57,973 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/FSDownload.class=0A= 2015-02-21 17:54:57,973 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/RMHAUtils.class=0A= 2015-02-21 17:54:57,974 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ProcfsBasedProcessTree$MemInfo.class=0A= 2015-02-21 17:54:57,974 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ProcfsBasedProcessTree$ProcessInfo.class=0A= 2015-02-21 17:54:57,974 DEBUG [main] 
org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ProcfsBasedProcessTree$ProcessTreeSmapMemInfo= .class=0A= 2015-02-21 17:54:57,974 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ProcfsBasedProcessTree$ProcessSmapMemoryInfo.= class=0A= 2015-02-21 17:54:57,974 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ProcfsBasedProcessTree$1.class=0A= 2015-02-21 17:54:57,975 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ProcfsBasedProcessTree.class=0A= 2015-02-21 17:54:57,975 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ResourceCalculatorProcessTree.class=0A= 2015-02-21 17:54:57,975 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/Times$1.class=0A= 2015-02-21 17:54:57,975 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/Times.class=0A= 2015-02-21 17:54:57,975 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/RackResolver.class=0A= 2015-02-21 17:54:57,975 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ResourceCalculatorPlugin.class=0A= 2015-02-21 17:54:57,975 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/WindowsResourceCalculatorPlugin.class=0A= 2015-02-21 17:54:57,976 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/WindowsBasedProcessTree$ProcessInfo.class=0A= 2015-02-21 17:54:57,976 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/WindowsBasedProcessTree.class=0A= 2015-02-21 17:54:57,976 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.class=0A= 2015-02-21 17:54:57,976 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/resource/ResourceCalculator.class=0A= 2015-02-21 17:54:57,976 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/resource/Resources$1.class=0A= 2015-02-21 17:54:57,976 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/resource/Resources$2.class=0A= 2015-02-21 17:54:57,976 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/resource/Resources.class=0A= 2015-02-21 17:54:57,976 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.class=0A= 2015-02-21 17:54:57,976 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/Clock.class=0A= 2015-02-21 17:54:57,976 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/SystemClock.class=0A= 2015-02-21 17:54:57,977 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/Apps.class=0A= 2015-02-21 17:54:57,977 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/TrackingUriPlugin.class=0A= 2015-02-21 17:54:57,977 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/YarnVersionInfo.class=0A= 2015-02-21 17:54:57,977 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/LinuxResourceCalculatorPlugin.class=0A= 2015-02-21 17:54:57,977 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/AbstractLivelinessMonitor$PingChecker.class=0A= 2015-02-21 17:54:57,977 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/AbstractLivelinessMonitor$1.class=0A= 2015-02-21 17:54:57,977 DEBUG [main] org.mortbay.log: Skipping entry: = 
org/apache/hadoop/yarn/util/AbstractLivelinessMonitor.class=0A= 2015-02-21 17:54:57,977 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ApplicationClassLoader$1.class=0A= 2015-02-21 17:54:57,977 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ApplicationClassLoader.class=0A= 2015-02-21 17:54:57,978 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/AuxiliaryServiceHelper.class=0A= 2015-02-21 17:54:57,978 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/package-info.class=0A= 2015-02-21 17:54:57,978 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachine.class=0A= 2015-02-21 17:54:57,978 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/InvalidStateTransitonException.class=0A= 2015-02-21 17:54:57,978 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/SingleArcTransition.class=0A= 2015-02-21 17:54:57,978 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/VisualizeStateMachine.class=0A= 2015-02-21 17:54:57,978 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/Graph$Edge.class=0A= 2015-02-21 17:54:57,978 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/Graph$Node.class=0A= 2015-02-21 17:54:57,978 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/Graph.class=0A= 2015-02-21 17:54:57,978 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/MultipleArcTransition.class=0A= 2015-02-21 17:54:57,979 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachineFactory$ApplicableTransition.cla= ss=0A= 2015-02-21 17:54:57,979 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachineFactory$TransitionsListNode.clas= s=0A= 2015-02-21 17:54:57,979 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachineFactory$ApplicableSingleOrMultip= leTransition.class=0A= 2015-02-21 17:54:57,979 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachineFactory$Transition.class=0A= 2015-02-21 17:54:57,979 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachineFactory$SingleInternalArc.class=0A= 2015-02-21 17:54:57,979 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachineFactory$MultipleInternalArc.clas= s=0A= 2015-02-21 17:54:57,979 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachineFactory$InternalStateMachine.cla= ss=0A= 2015-02-21 17:54:57,979 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachineFactory.class=0A= 2015-02-21 17:54:57,979 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factories/impl/pb/package-info.class=0A= 2015-02-21 17:54:57,979 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factories/impl/pb/RecordFactoryPBImpl.class=0A= 2015-02-21 17:54:57,980 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factories/impl/pb/RpcClientFactoryPBImpl.class=0A= 2015-02-21 17:54:57,980 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factories/impl/pb/RpcServerFactoryPBImpl.class=0A= 2015-02-21 17:54:57,980 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factories/package-info.class=0A= 2015-02-21 17:54:57,980 DEBUG [main] 
org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factories/RpcServerFactory.class=0A= 2015-02-21 17:54:57,980 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factories/RpcClientFactory.class=0A= 2015-02-21 17:54:57,980 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/package-info.class=0A= 2015-02-21 17:54:57,980 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogDeletionService$LogDel= etionTask.class=0A= 2015-02-21 17:54:57,980 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogDeletionService.class=0A= 2015-02-21 17:54:57,980 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat$LogKey.class=0A= 2015-02-21 17:54:57,981 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat$LogValue.class=0A= 2015-02-21 17:54:57,981 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat$LogWriter$1.cla= ss=0A= 2015-02-21 17:54:57,981 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat$LogWriter.class=0A= 2015-02-21 17:54:57,981 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat$LogReader.class=0A= 2015-02-21 17:54:57,981 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat$ContainerLogsRe= ader.class=0A= 2015-02-21 17:54:57,981 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.class=0A= 2015-02-21 17:54:57,981 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/ContainerLogsRetentionPolicy.class=0A= 2015-02-21 17:54:57,981 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/LogAggregationUtils.class=0A= 2015-02-21 17:54:57,981 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/LogCLIHelpers.class=0A= 2015-02-21 17:54:57,982 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factory/providers/package-info.class=0A= 2015-02-21 17:54:57,982 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factory/providers/RpcFactoryProvider.class=0A= 2015-02-21 17:54:57,982 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/package-info.class=0A= 2015-02-21 17:54:57,982 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/FileSystemBasedConfigurationProvider.class=0A= 2015-02-21 17:54:57,982 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/client/ConfiguredRMFailoverProxyProvider.class=0A= 2015-02-21 17:54:57,982 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/client/RMFailoverProxyProvider.class=0A= 2015-02-21 17:54:57,982 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/client/RMProxy$1.class=0A= 2015-02-21 17:54:57,982 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/client/RMProxy.class=0A= 2015-02-21 17:54:57,983 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/client/AHSProxy$1.class=0A= 2015-02-21 17:54:57,983 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/client/AHSProxy.class=0A= 2015-02-21 17:54:57,983 DEBUG [main] org.mortbay.log: Skipping entry: = 
2015-02-21 17:54:57,983 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/client/ClientRMProxy$ClientRMProtocols.class
2015-02-21 17:54:57,983 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/client/ClientRMProxy.class
2015-02-21 17:54:57,983 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/client/NMProxy.class
2015-02-21 17:54:57,983 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/client/ServerProxy$1.class
2015-02-21 17:54:57,983 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/client/ServerProxy.class
2015-02-21 17:54:57,983 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/LocalConfigurationProvider.class
2015-02-21 17:54:57,983 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/ContainerRollingLogAppender.class
2015-02-21 17:54:57,983 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/YarnUncaughtExceptionHandler.class
2015-02-21 17:54:57,984 DEBUG [main] org.mortbay.log: Skipping entry: org/apache/hadoop/yarn/ContainerLogAppender.class
2015-02-21 17:54:57,984 DEBUG [main] org.mortbay.log: Skipping entry: META-INF/maven/
2015-02-21 17:54:57,984 DEBUG [main] org.mortbay.log: Skipping entry: META-INF/maven/org.apache.hadoop/
2015-02-21 17:54:57,984 DEBUG [main] org.mortbay.log: Skipping entry: META-INF/maven/org.apache.hadoop/hadoop-yarn-common/
2015-02-21 17:54:57,984 DEBUG [main] org.mortbay.log: Skipping entry: META-INF/maven/org.apache.hadoop/hadoop-yarn-common/pom.xml
2015-02-21 17:54:57,984 DEBUG [main] org.mortbay.log: Skipping entry: META-INF/maven/org.apache.hadoop/hadoop-yarn-common/pom.properties
2015-02-21 17:54:57,985 DEBUG [main] org.mortbay.log: Checking Resource aliases
2015-02-21 17:54:57,985 DEBUG [main] org.mortbay.log: webapp=file:/tmp/Jetty_0_0_0_0_40686_mapreduce____40v6rl/webapp/
2015-02-21 17:54:57,998 DEBUG [main] org.mortbay.log: getResource(org/mortbay/jetty/webapp/webdefault.xml)=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jetty-6.1.26.cloudera.4.jar!/org/mortbay/jetty/webapp/webdefault.xml
2015-02-21 17:54:57,998 DEBUG [main] org.mortbay.log: parse: jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jetty-6.1.26.cloudera.4.jar!/org/mortbay/jetty/webapp/webdefault.xml
2015-02-21 17:54:58,000 DEBUG [main] org.mortbay.log: parsing: sid=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jetty-6.1.26.cloudera.4.jar!/org/mortbay/jetty/webapp/webdefault.xml,pid=null
2015-02-21 17:54:58,010 DEBUG [main] org.mortbay.log: ContextParam: org.mortbay.jetty.webapp.NoTLDJarPattern=start.jar|ant-.*\.jar|dojo-.*\.jar|jetty-.*\.jar|jsp-api-.*\.jar|junit-.*\.jar|servlet-api-.*\.jar|dnsns\.jar|rt\.jar|jsse\.jar|tools\.jar|sunpkcs11\.jar|sunjce_provider\.jar|xerces.*\.jar
2015-02-21 17:54:58,012 DEBUG [main] org.mortbay.log: loaded class org.apache.jasper.servlet.JspServlet from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 17:54:58,014 DEBUG [main] org.mortbay.log: filterNameMap={guice=guice, safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:58,015 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15), (F=AM_PROXY_FILTER,[/stacks],[],15), (F=AM_PROXY_FILTER,[/logLevel],[],15), (F=AM_PROXY_FILTER,[/metrics],[],15), (F=AM_PROXY_FILTER,[/jmx],[],15), (F=AM_PROXY_FILTER,[/conf],[],15), (F=AM_PROXY_FILTER,[/mapreduce/*],[],15), (F=AM_PROXY_FILTER,[/ws/*],[],15), (F=guice,[/*],[],15)]
2015-02-21 17:54:58,015 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:58,015 DEBUG [main] org.mortbay.log: servletPathMap={/metrics=metrics, /conf=conf, /jmx=jmx, /stacks=stacks, /logLevel=logLevel}
2015-02-21 17:54:58,015 DEBUG [main] org.mortbay.log: servletNameMap={jmx=jmx, metrics=metrics, logLevel=logLevel, conf=conf, stacks=stacks}
2015-02-21 17:54:58,015 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + default as servlet
2015-02-21 17:54:58,015 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + jsp as servlet
2015-02-21 17:54:58,015 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (S=default,[/]) as servletMapping
2015-02-21 17:54:58,015 DEBUG [main] org.mortbay.log: Container ServletHandler@60fd097b + (S=jsp,[*.jsp, *.jspf, *.jspx, *.xsp, *.JSP, *.JSPF, *.JSPX, *.XSP]) as servletMapping
2015-02-21 17:54:58,015 DEBUG [main] org.mortbay.log: filterNameMap={guice=guice, safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:58,015 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15), (F=AM_PROXY_FILTER,[/stacks],[],15), (F=AM_PROXY_FILTER,[/logLevel],[],15), (F=AM_PROXY_FILTER,[/metrics],[],15), (F=AM_PROXY_FILTER,[/jmx],[],15), (F=AM_PROXY_FILTER,[/conf],[],15), (F=AM_PROXY_FILTER,[/mapreduce/*],[],15), (F=AM_PROXY_FILTER,[/ws/*],[],15), (F=guice,[/*],[],15)]
2015-02-21 17:54:58,015 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:58,015 DEBUG [main] org.mortbay.log: servletPathMap={*.XSP=jsp, *.jsp=jsp, *.jspx=jsp, *.JSPF=jsp, /conf=conf, /=default, *.xsp=jsp, /stacks=stacks, /logLevel=logLevel, *.JSPX=jsp, *.jspf=jsp, /metrics=metrics, /jmx=jmx, *.JSP=jsp}
2015-02-21 17:54:58,015 DEBUG [main] org.mortbay.log: servletNameMap={jsp=jsp, default=default, jmx=jmx, metrics=metrics, logLevel=logLevel, conf=conf, stacks=stacks}
2015-02-21 17:54:58,016 DEBUG [main] org.mortbay.log: Configuring web-jetty.xml
2015-02-21 17:54:58,016 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-aws-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,094 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-tools-1.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,095 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-common-2.5.0-cdh5.3.0-tests.jar
2015-02-21 17:54:58,100 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-format-2.1.0-cdh5.3.0-sources.jar
2015-02-21 17:54:58,100 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-annotations-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,100 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-column-1.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,103 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-common-1.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,103 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-test-hadoop2-1.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,104 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-auth-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,104 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-scrooge_2.10-1.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,104 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-protobuf-1.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,105 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-common-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,112 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-generator-1.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,113 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-hadoop-1.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,113 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-encoding-1.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,114 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-hadoop-bundle-1.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,121 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-nfs-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,122 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-format-2.1.0-cdh5.3.0-javadoc.jar
2015-02-21 17:54:58,122 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-scala_2.10-1.5.0-cdh5.4.0-SNAPSHOT.jar
2015-02-21 17:54:58,123 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-format-2.1.0-cdh5.3.0.jar
2015-02-21 17:54:58,124 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-avro-1.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,124 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-thrift-1.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,125 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-cascading-1.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,125 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-pig-1.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,126 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-pig-bundle-1.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,137 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-jackson-1.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,139 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jackson-core-asl-1.8.8.jar
2015-02-21 17:54:58,139 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jackson-xc-1.8.8.jar
2015-02-21 17:54:58,139 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-configuration-1.6.jar
2015-02-21 17:54:58,140 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/curator-recipes-2.6.0.jar
2015-02-21 17:54:58,144 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/httpclient-4.2.5.jar
2015-02-21 17:54:58,144 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jasper-compiler-5.5.23.jar
2015-02-21 17:54:58,145 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/snappy-java-1.0.4.1.jar
2015-02-21 17:54:58,145 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/netty-3.6.2.Final.jar
2015-02-21 17:54:58,150 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/log4j-1.2.17.jar
2015-02-21 17:54:58,150 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/slf4j-log4j12-1.7.5.jar
2015-02-21 17:54:58,150 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/httpcore-4.2.5.jar
2015-02-21 17:54:58,151 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jersey-core-1.9.jar
2015-02-21 17:54:58,154 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-logging-1.1.3.jar
2015-02-21 17:54:58,154 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/slf4j-api-1.7.5.jar
2015-02-21 17:54:58,155 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-el-1.0.jar
2015-02-21 17:54:58,155 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop/lib/aws-java-sdk-1.7.4.jar
2015-02-21 17:54:58,166 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsr305-1.3.9.jar
2015-02-21 17:54:58,166 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/guava-11.0.2.jar
2015-02-21 17:54:58,175 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/avro-1.7.6-cdh5.3.0.jar
2015-02-21 17:54:58,176 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/curator-client-2.6.0.jar
2015-02-21 17:54:58,176 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jasper-runtime-5.5.23.jar
2015-02-21 17:54:58,177 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/apacheds-kerberos-codec-2.0.0-M15.jar
2015-02-21 17:54:58,182 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/api-util-1.0.0-M20.jar
2015-02-21 17:54:58,183 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jets3t-0.9.0.jar
2015-02-21 17:54:58,183 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hamcrest-core-1.3.jar
2015-02-21 17:54:58,183 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-io-2.4.jar
2015-02-21 17:54:58,184 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jersey-server-1.9.jar
2015-02-21 17:54:58,184 DEBUG [main] org.mortbay.log: TLD found jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jersey-server-1.9.jar!/META-INF/taglib.tld
2015-02-21 17:54:58,185 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/mockito-all-1.8.5.jar
2015-02-21 17:54:58,186 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jettison-1.1.jar
2015-02-21 17:54:58,188 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/stax-api-1.0-2.jar
2015-02-21 17:54:58,188 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-collections-3.2.1.jar
2015-02-21 17:54:58,188 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-beanutils-core-1.8.0.jar
2015-02-21 17:54:58,189 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/activation-1.1.jar
2015-02-21 17:54:58,189 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/xz-1.0.jar
2015-02-21 17:54:58,189 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/curator-framework-2.6.0.jar
2015-02-21 17:54:58,189 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-lang-2.6.jar
2015-02-21 17:54:58,190 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-cli-1.2.jar
2015-02-21 17:54:58,190 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-digester-1.8.jar
2015-02-21 17:54:58,190 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jersey-json-1.9.jar
2015-02-21 17:54:58,190 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-math3-3.1.1.jar
2015-02-21 17:54:58,192 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/asm-3.2.jar
2015-02-21 17:54:58,192 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-httpclient-3.1.jar
2015-02-21 17:54:58,192 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/gson-2.2.4.jar
2015-02-21 17:54:58,193 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/api-asn1-api-1.0.0-M20.jar
2015-02-21 17:54:58,193 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/apacheds-i18n-2.0.0-M15.jar
2015-02-21 17:54:58,193 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-net-3.1.jar
2015-02-21 17:54:58,193 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/zookeeper-3.4.5-cdh5.3.0.jar
2015-02-21 17:54:58,194 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-beanutils-1.7.0.jar
2015-02-21 17:54:58,195 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/java-xmlbuilder-0.4.jar
2015-02-21 17:54:58,195 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/xmlenc-0.52.jar
2015-02-21 17:54:58,195 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-codec-1.4.jar
2015-02-21 17:54:58,195 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-compress-1.4.1.jar
2015-02-21 17:54:58,196 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/paranamer-2.3.jar
2015-02-21 17:54:58,196 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hue-plugins-3.7.0-cdh5.3.0.jar
2015-02-21 17:54:58,198 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jaxb-impl-2.2.3-1.jar
2015-02-21 17:54:58,199 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jaxb-api-2.2.2.jar
2015-02-21 17:54:58,199 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsch-0.1.42.jar
2015-02-21 17:54:58,199 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jackson-jaxrs-1.8.8.jar
2015-02-21 17:54:58,199 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jackson-mapper-asl-1.8.8.jar
2015-02-21 17:54:58,200 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/protobuf-java-2.5.0.jar
2015-02-21 17:54:58,200 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-hdfs-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,205 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-hdfs-nfs-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,205 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-hdfs-2.5.0-cdh5.3.0-tests.jar
2015-02-21 17:54:58,207 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-daemon-1.0.13.jar
2015-02-21 17:54:58,207 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,208 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-applications-distributedshell-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,208 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-server-web-proxy-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,208 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-server-nodemanager-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,209 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-server-tests-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,209 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-server-resourcemanager-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,210 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-server-common-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,210 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-api-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,211 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-server-applicationhistoryservice-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,211 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-client-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,212 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-applications-unmanaged-am-launcher-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,212 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/guice-3.0.jar
2015-02-21 17:54:58,213 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/aopalliance-1.0.jar
2015-02-21 17:54:58,213 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jersey-guice-1.9.jar
2015-02-21 17:54:58,213 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jersey-client-1.9.jar
2015-02-21 17:54:58,213 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/leveldbjni-all-1.8.jar
2015-02-21 17:54:58,214 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jline-0.9.94.jar
2015-02-21 17:54:58,214 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/javax.inject-1.jar
2015-02-21 17:54:58,214 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/guice-servlet-3.0.jar
2015-02-21 17:54:58,214 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/microsoft-windowsazure-storage-sdk-0.6.0.jar
2015-02-21 17:54:58,215 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,215 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-shuffle-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,215 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-rumen-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,216 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-hs-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,216 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-azure-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,216 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-hs-plugins-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,216 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-nativetask-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,216 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-sls-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,217 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-examples-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,217 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-common-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,218 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-app-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,218 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-gridmix-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,219 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/metrics-core-3.0.1.jar
2015-02-21 17:54:58,219 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-archives-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,219 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-extras-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,219 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jackson-annotations-2.2.3.jar
2015-02-21 17:54:58,219 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-core-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,221 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/joda-time-1.6.jar
2015-02-21 17:54:58,222 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-datajoin-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,222 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.0-tests.jar
2015-02-21 17:54:58,223 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-distcp-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,223 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jackson-core-2.2.3.jar
2015-02-21 17:54:58,223 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-streaming-2.5.0-cdh5.3.0.jar
2015-02-21 17:54:58,224 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jackson-databind-2.2.3.jar
2015-02-21 17:54:58,225 DEBUG [main] org.mortbay.log: TLD search of file:/data/yarn/nm/usercache/cloudera/appcache/application_1424550134651_0001/filecache/10/job.jar/job.jar
2015-02-21 17:54:58,225 DEBUG [main] org.mortbay.log: TLD search of file:/usr/java/jdk1.7.0_67/jre/lib/ext/sunec.jar
2015-02-21 17:54:58,225 DEBUG [main] org.mortbay.log: TLD search of file:/usr/java/jdk1.7.0_67/jre/lib/ext/zipfs.jar
2015-02-21 17:54:58,226 DEBUG [main] org.mortbay.log: TLD search of file:/usr/java/jdk1.7.0_67/jre/lib/ext/localedata.jar
2015-02-21 17:54:58,227 DEBUG [main] org.mortbay.log: loaded class com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl from null
2015-02-21 17:54:58,227 DEBUG [main] org.mortbay.log: loaded class com.sun.org.apache.xerces.internal.impl.dv.dtd.DTDDVFactoryImpl from null
2015-02-21 17:54:58,228 DEBUG [main] org.mortbay.log: getResource(javax/servlet/jsp/resources/web-jsptaglibrary_1_1.dtd)=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsp-api-2.1.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_1_1.dtd
2015-02-21 17:54:58,228 DEBUG [main] org.mortbay.log: getResource(javax/servlet/jsp/resources/web-jsptaglibrary_1_2.dtd)=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsp-api-2.1.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_1_2.dtd
2015-02-21 17:54:58,228 DEBUG [main] org.mortbay.log: getResource(javax/servlet/jsp/resources/web-jsptaglibrary_2_0.xsd)=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsp-api-2.1.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_2_0.xsd
2015-02-21 17:54:58,228 DEBUG [main] org.mortbay.log: getResource(javax/servlet/jsp/resources/web-jsptaglibrary_1_1.dtd)=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsp-api-2.1.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_1_1.dtd
2015-02-21 17:54:58,228 DEBUG [main] org.mortbay.log: getResource(javax/servlet/jsp/resources/web-jsptaglibrary_1_2.dtd)=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsp-api-2.1.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_1_2.dtd
2015-02-21 17:54:58,229 DEBUG [main] org.mortbay.log: getResource(javax/servlet/jsp/resources/web-jsptaglibrary_2_0.xsd)=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsp-api-2.1.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_2_0.xsd
2015-02-21 17:54:58,229 DEBUG [main] org.mortbay.log: TLD=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jersey-server-1.9.jar!/META-INF/taglib.tld
2015-02-21 17:54:58,232 DEBUG [main] org.mortbay.log: resolveEntity(-//Sun Microsystems, Inc.//DTD JSP Tag Library 1.2//EN, http://java.sun.com/dtd/web-jsptaglibrary_1_2.dtd)
2015-02-21 17:54:58,232 DEBUG [main] org.mortbay.log: Can't exact match entity in redirect map, trying web-jsptaglibrary_1_2.dtd
2015-02-21 17:54:58,232 DEBUG [main] org.mortbay.log: Redirected entity http://java.sun.com/dtd/web-jsptaglibrary_1_2.dtd --> jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsp-api-2.1.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_1_2.dtd
2015-02-21 17:54:58,237 DEBUG [main] org.mortbay.log: Container Server@541821e6 + org.mortbay.jetty.servlet.HashSessionIdManager@1991a218 as sessionIdManager
2015-02-21 17:54:58,237 DEBUG [main] org.mortbay.log: Init SecureRandom.
2015-02-21 17:54:58,238 DEBUG [main] org.mortbay.log: started org.mortbay.jetty.servlet.HashSessionIdManager@1991a218
2015-02-21 17:54:58,238 DEBUG [main] org.mortbay.log: started org.mortbay.jetty.servlet.HashSessionManager@6911a11b
2015-02-21 17:54:58,238 DEBUG [main] org.mortbay.log: filterNameMap={guice=guice, safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:58,239 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15), (F=AM_PROXY_FILTER,[/stacks],[],15), (F=AM_PROXY_FILTER,[/logLevel],[],15), (F=AM_PROXY_FILTER,[/metrics],[],15), (F=AM_PROXY_FILTER,[/jmx],[],15), (F=AM_PROXY_FILTER,[/conf],[],15), (F=AM_PROXY_FILTER,[/mapreduce/*],[],15), (F=AM_PROXY_FILTER,[/ws/*],[],15), (F=guice,[/*],[],15)]
2015-02-21 17:54:58,239 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:58,239 DEBUG [main] org.mortbay.log: servletPathMap={*.XSP=jsp, *.jsp=jsp, *.jspx=jsp, *.JSPF=jsp, /conf=conf, /=default, *.xsp=jsp, /stacks=stacks, /logLevel=logLevel, *.JSPX=jsp, *.jspf=jsp, /metrics=metrics, /jmx=jmx, *.JSP=jsp}
2015-02-21 17:54:58,239 DEBUG [main] org.mortbay.log: servletNameMap={jsp=jsp, default=default, jmx=jmx, metrics=metrics, logLevel=logLevel, conf=conf, stacks=stacks}
2015-02-21 17:54:58,239 DEBUG [main] org.mortbay.log: starting ServletHandler@60fd097b
2015-02-21 17:54:58,239 DEBUG [main] org.mortbay.log: started ServletHandler@60fd097b
2015-02-21 17:54:58,239 DEBUG [main] org.mortbay.log: starting SecurityHandler@4799bfc
2015-02-21 17:54:58,239 DEBUG [main] org.mortbay.log: started SecurityHandler@4799bfc
2015-02-21 17:54:58,239 DEBUG [main] org.mortbay.log: starting SessionHandler@4befbfaf
2015-02-21 17:54:58,239 DEBUG [main] org.mortbay.log: started SessionHandler@4befbfaf
2015-02-21 17:54:58,239 DEBUG [main] org.mortbay.log: starting org.mortbay.jetty.webapp.WebAppContext@23f3e3fd{/,jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/mapreduce}
2015-02-21 17:54:58,239 DEBUG [main] org.mortbay.log: starting ErrorPageErrorHandler@4682981
2015-02-21 17:54:58,239 DEBUG [main] org.mortbay.log: started ErrorPageErrorHandler@4682981
2015-02-21 17:54:58,239 DEBUG [main] org.mortbay.log: loaded class org.apache.hadoop.http.NoCacheFilter from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 17:54:58,239 DEBUG [main] org.mortbay.log: Holding class org.apache.hadoop.http.NoCacheFilter
2015-02-21 17:54:58,240 DEBUG [main] org.mortbay.log: started NoCacheFilter
2015-02-21 17:54:58,241 DEBUG [main] org.mortbay.log: loaded class org.apache.hadoop.http.HttpServer2$QuotingInputFilter from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 17:54:58,241 DEBUG [main] org.mortbay.log: Holding class org.apache.hadoop.http.HttpServer2$QuotingInputFilter
2015-02-21 17:54:58,242 DEBUG [main] org.mortbay.log: started safety
2015-02-21 17:54:58,242 DEBUG [main] org.mortbay.log: loaded class org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 17:54:58,242 DEBUG [main] org.mortbay.log: Holding class org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
2015-02-21 17:54:58,244 DEBUG [main] org.mortbay.log: loaded class org.apache.commons.logging.impl.Log4JLogger
2015-02-21 17:54:58,244 DEBUG [main] org.mortbay.log: loaded class org.apache.commons.logging.impl.Log4JLogger from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 17:54:58,245 DEBUG [main] org.mortbay.log: started AM_PROXY_FILTER
2015-02-21 17:54:58,245 DEBUG [main] org.mortbay.log: loaded class com.google.inject.servlet.GuiceFilter from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 17:54:58,245 DEBUG [main] org.mortbay.log: Holding class com.google.inject.servlet.GuiceFilter
2015-02-21 17:54:58,247 DEBUG [main] org.mortbay.log: started guice
2015-02-21 17:54:58,248 DEBUG [main] org.mortbay.log: started conf
2015-02-21 17:54:58,248 DEBUG [main] org.mortbay.log: started stacks
2015-02-21 17:54:58,248 DEBUG [main] org.mortbay.log: started jmx
2015-02-21 17:54:58,248 DEBUG [main] org.mortbay.log: started logLevel
2015-02-21 17:54:58,248 DEBUG [main] org.mortbay.log: started metrics
2015-02-21 17:54:58,248 DEBUG [main] org.mortbay.log: loaded class org.apache.jasper.servlet.JspServlet from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 17:54:58,248 DEBUG [main] org.mortbay.log: Holding class org.apache.jasper.servlet.JspServlet
2015-02-21 17:54:58,290 DEBUG [main] org.apache.jasper.compiler.JspRuntimeContext: Parent class loader is: ContextLoader@mapreduce([]) / sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 17:54:58,291 DEBUG [main] org.apache.jasper.compiler.JspRuntimeContext: Compilation classpath initialized: /tmp/Jetty_0_0_0_0_40686_mapreduce____40v6rl/jsp:null
2015-02-21 17:54:58,295 DEBUG [main] org.apache.jasper.servlet.JspServlet: Scratch dir for the JSP engine is: /tmp/Jetty_0_0_0_0_40686_mapreduce____40v6rl/jsp
2015-02-21 17:54:58,295 DEBUG [main] org.apache.jasper.servlet.JspServlet: IMPORTANT: Do not modify the generated servlets
2015-02-21 17:54:58,295 DEBUG [main] org.mortbay.log: started jsp
2015-02-21 17:54:58,295 DEBUG [main] org.mortbay.log: loaded class org.mortbay.jetty.servlet.DefaultServlet
2015-02-21 17:54:58,295 DEBUG [main] org.mortbay.log: loaded class org.mortbay.jetty.servlet.DefaultServlet from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 17:54:58,295 DEBUG [main] org.mortbay.log: Holding class org.mortbay.jetty.servlet.DefaultServlet
2015-02-21 17:54:58,304 DEBUG [main] org.mortbay.log: started org.mortbay.jetty.servlet.DefaultServlet$NIOResourceCache@fcb1408
2015-02-21 17:54:58,304 DEBUG [main] org.mortbay.log: started org.mortbay.jetty.ResourceCache@5d902151
2015-02-21 17:54:58,304 DEBUG [main] org.mortbay.log: resource base = file:/tmp/Jetty_0_0_0_0_40686_mapreduce____40v6rl/webapp/
2015-02-21 17:54:58,304 DEBUG [main] org.mortbay.log: started default
2015-02-21 17:54:58,304 DEBUG [main] org.mortbay.log: started org.mortbay.jetty.webapp.WebAppContext@23f3e3fd{/,jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/mapreduce}
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: Container org.mortbay.jetty.servlet.Context@527cd669{/static,jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/static} + ErrorHandler@217b7cd4 as errorHandler
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: pathFilters=[(F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[/*],[],15)]
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: servletPathMap={/*=org.mortbay.jetty.servlet.DefaultServlet-1184676877}
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: servletNameMap={org.mortbay.jetty.servlet.DefaultServlet-1184676877=org.mortbay.jetty.servlet.DefaultServlet-1184676877}
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: starting ServletHandler@1e0b1ce
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: started ServletHandler@1e0b1ce
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: starting org.mortbay.jetty.servlet.Context@527cd669{/static,jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/static}
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: starting ErrorHandler@217b7cd4
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: started ErrorHandler@217b7cd4
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: Holding class org.apache.hadoop.http.HttpServer2$QuotingInputFilter
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: started safety
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: Holding class org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: started AM_PROXY_FILTER
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: Holding class org.mortbay.jetty.servlet.DefaultServlet
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: started org.mortbay.jetty.servlet.DefaultServlet-1184676877
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: started org.mortbay.jetty.servlet.Context@527cd669{/static,jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/static}
2015-02-21 17:54:58,305 DEBUG [main] org.mortbay.log: starting ContextHandlerCollection@ffaf13d
2015-02-21 17:54:58,306 DEBUG [main] org.mortbay.log: started ContextHandlerCollection@ffaf13d
2015-02-21 17:54:58,306 DEBUG [main] org.mortbay.log: starting Server@541821e6
2015-02-21 17:54:58,310 DEBUG [main] org.mortbay.log: started org.mortbay.jetty.nio.SelectChannelConnector$1@fdbf8f6
2015-02-21 17:54:58,313 INFO [main] org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:40686
2015-02-21 17:54:58,313 DEBUG [main] org.mortbay.log: started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:40686
2015-02-21 17:54:58,313 DEBUG [main] org.mortbay.log: started Server@541821e6
2015-02-21 17:54:58,313 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app /mapreduce started at 40686
2015-02-21 17:54:58,471 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /([])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#index
2015-02-21 17:54:58,474 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,482 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,483 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /app([])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#index
2015-02-21 17:54:58,483 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,483 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,483 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /job([:job.id])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#job
2015-02-21 17:54:58,484 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,484 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,484 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /conf([:job.id])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#conf
2015-02-21 17:54:58,484 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,484 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,484 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /jobcounters([:job.id])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#jobCounters
2015-02-21 17:54:58,484 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,484 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,484 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /singlejobcounter([:job.id, :counter.group, :counter.name])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#singleJobCounter
2015-02-21 17:54:58,485 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,485 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,485 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /tasks([:job.id, :task.type, :task.state])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#tasks
2015-02-21 17:54:58,485 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,485 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,486 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /attempts([:job.id, :task.type, :attempt.state])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#attempts
2015-02-21 17:54:58,486 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,486 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,486 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /task([:task.id])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#task
2015-02-21 17:54:58,486 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,486 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,486 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /taskcounters([:task.id])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#taskCounters
2015-02-21 17:54:58,486 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,486 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,486 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /singletaskcounter([:task.id, :counter.group, :counter.name])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#singleTaskCounter
2015-02-21 17:54:58,486 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,486 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 17:54:58,703 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2015-02-21 17:54:58,704 DEBUG [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.client.MRClientService is started
2015-02-21 17:54:58,704 DEBUG [main] org.apache.hadoop.service.CompositeService: org.apache.hadoop.mapreduce.v2.app.MRAppMaster: starting services, size=7
2015-02-21 17:54:58,705 DEBUG [main] org.apache.hadoop.service.AbstractService: Service Dispatcher is started
2015-02-21 17:54:58,706 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: AM_STARTED
2015-02-21 17:54:58,707 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: JOB_SUBMITTED
2015-02-21 17:54:58,707 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: JOB_CREATE
2015-02-21 17:54:58,707 DEBUG [main] org.apache.hadoop.service.AbstractService: Service CommitterEventHandler is started
2015-02-21 17:54:58,709 DEBUG [main] org.apache.hadoop.ipc.Server: rpcKind=RPC_WRITABLE, rpcRequestWrapperClass=class org.apache.hadoop.ipc.WritableRpcEngine$Invocation, rpcInvoker=org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker@3fd87157
2015-02-21 17:54:58,709 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-02-21 17:54:58,709 DEBUG [main] org.apache.hadoop.ipc.Server: TOKEN authentication enabled for secret manager
2015-02-21 17:54:58,709 DEBUG [main] org.apache.hadoop.ipc.Server: Server accepts auth methods:[TOKEN, SIMPLE]
2015-02-21 17:54:58,710 INFO [Socket Reader #1 for port 50483] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50483
2015-02-21 17:54:58,711 DEBUG [main] org.apache.hadoop.ipc.metrics.RpcMetrics: Initialized MetricsRegistry{info=MetricsInfoImpl{name=rpc, description=rpc}, tags=[MetricsTag{info=MetricsInfoImpl{name=port, description=RPC port}, value=50483}], metrics=[]}
2015-02-21 17:54:58,711 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterLong org.apache.hadoop.ipc.metrics.RpcMetrics.receivedBytes with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Number of received bytes], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:58,711 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterLong org.apache.hadoop.ipc.metrics.RpcMetrics.sentBytes with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Number of sent bytes], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:58,711 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.ipc.metrics.RpcMetrics.rpcQueueTime with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Queue time], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:58,712 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.ipc.metrics.RpcMetrics.rpcProcessingTime with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Processsing time], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:58,712 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterInt org.apache.hadoop.ipc.metrics.RpcMetrics.rpcAuthenticationFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Number of authentication failures], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:58,712 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterInt org.apache.hadoop.ipc.metrics.RpcMetrics.rpcAuthenticationSuccesses with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Number of authentication successes], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:58,712 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterInt org.apache.hadoop.ipc.metrics.RpcMetrics.rpcAuthorizationFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Number of authorization failures], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:58,712 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterInt org.apache.hadoop.ipc.metrics.RpcMetrics.rpcAuthorizationSuccesses with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Number of authorization sucesses], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:58,712 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: method public int org.apache.hadoop.ipc.metrics.RpcMetrics.numOpenConnections() with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Number of open connections], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:58,713 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: method public int org.apache.hadoop.ipc.metrics.RpcMetrics.callQueueLength() with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Length of the call queue], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:58,713 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: RpcActivityForPort50483, Aggregate RPC metrics
2015-02-21 17:54:58,713 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: source.source.start_mbeans
2015-02-21 17:54:58,713 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'MetricsConfig' for key: source.start_mbeans
2015-02-21 17:54:58,713 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: *.source.start_mbeans
2015-02-21 17:54:58,713 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating attr cache...
2015-02-21 17:54:58,713 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done. # tags & metrics=15
2015-02-21 17:54:58,713 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating info cache...
2015-02-21 17:54:58,713 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: [javax.management.MBeanAttributeInfo[description=RPC port, name=tag.port, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Metrics context, name=tag.Context, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Local hostname, name=tag.Hostname, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of received bytes, name=ReceivedBytes, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of sent bytes, name=SentBytes, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of ops for queue time, name=RpcQueueTimeNumOps, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Average time for queue time, name=RpcQueueTimeAvgTime, type=java.lang.Double, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of ops for processsing time, name=RpcProcessingTimeNumOps, type=java.lang.Long, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Average time for processsing time, name=RpcProcessingTimeAvgTime, type=java.lang.Double, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of authentication failures, name=RpcAuthenticationFailures, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of authentication successes, name=RpcAuthenticationSuccesses, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of authorization failures, name=RpcAuthorizationFailures, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of authorization sucesses, name=RpcAuthorizationSuccesses, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Number of open connections, name=NumOpenConnections, type=java.lang.Integer, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Length of the call queue, name=CallQueueLength, type=java.lang.Integer, read-only, descriptor={}]]
2015-02-21 17:54:58,713 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done
2015-02-21 17:54:58,713 DEBUG [main] org.apache.hadoop.metrics2.util.MBeans: Registered Hadoop:service=MRAppMaster,name=RpcActivityForPort50483
2015-02-21 17:54:58,713 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort50483 registered.
2015-02-21 17:54:58,713 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Registered source RpcActivityForPort50483
2015-02-21 17:54:58,714 DEBUG [main] org.apache.hadoop.ipc.metrics.RpcDetailedMetrics: MetricsInfoImpl{name=rpcdetailed, description=rpcdetailed}
2015-02-21 17:54:58,714 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRates org.apache.hadoop.ipc.metrics.RpcDetailedMetrics.rates with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 17:54:58,714 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: RpcDetailedActivityForPort50483, Per method RPC metrics
2015-02-21 17:54:58,714 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: source.source.start_mbeans
2015-02-21 17:54:58,714 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'MetricsConfig' for key: source.start_mbeans
2015-02-21 17:54:58,714 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent 'PropertiesConfiguration' for key: *.source.start_mbeans
2015-02-21 17:54:58,714 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating attr cache...
2015-02-21 17:54:58,714 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done. # tags & metrics=3
# tags & = metrics=3D3=0A= 2015-02-21 17:54:58,714 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating info = cache...=0A= 2015-02-21 17:54:58,714 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: = [javax.management.MBeanAttributeInfo[description=3DRPC port, = name=3Dtag.port, type=3Djava.lang.String, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DMetrics context, = name=3Dtag.Context, type=3Djava.lang.String, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DLocal hostname, = name=3Dtag.Hostname, type=3Djava.lang.String, read-only, = descriptor=3D{}]]=0A= 2015-02-21 17:54:58,714 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done=0A= 2015-02-21 17:54:58,714 DEBUG [main] = org.apache.hadoop.metrics2.util.MBeans: Registered = Hadoop:service=3DMRAppMaster,name=3DRpcDetailedActivityForPort50483=0A= 2015-02-21 17:54:58,714 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source = RpcDetailedActivityForPort50483 registered.=0A= 2015-02-21 17:54:58,714 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Registered source = RpcDetailedActivityForPort50483=0A= 2015-02-21 17:54:58,715 DEBUG [main] org.apache.hadoop.ipc.Server: = RpcKind =3D RPC_PROTOCOL_BUFFER Protocol Name =3D = org.apache.hadoop.ipc.ProtocolMetaInfoPB version=3D1 = ProtocolImpl=3Dorg.apache.hadoop.ipc.protobuf.ProtocolInfoProtos$Protocol= InfoService$2 protocolClass=3Dorg.apache.hadoop.ipc.ProtocolMetaInfoPB=0A= 2015-02-21 17:54:58,716 DEBUG [main] org.apache.hadoop.ipc.Server: = RpcKind =3D RPC_WRITABLE Protocol Name =3D = org.apache.hadoop.mapred.TaskUmbilicalProtocol version=3D19 = ProtocolImpl=3Dorg.apache.hadoop.mapred.TaskAttemptListenerImpl = protocolClass=3Dorg.apache.hadoop.mapred.TaskUmbilicalProtocol=0A= 2015-02-21 17:54:58,718 INFO [IPC Server Responder] = org.apache.hadoop.ipc.Server: IPC Server Responder: starting=0A= 2015-02-21 17:54:58,718 DEBUG [IPC Server handler 0 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50483: starting=0A= 2015-02-21 17:54:58,718 INFO [IPC Server listener on 50483] = org.apache.hadoop.ipc.Server: IPC Server listener on 50483: starting=0A= 2015-02-21 17:54:58,720 DEBUG [IPC Server handler 2 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50483: starting=0A= 2015-02-21 17:54:58,720 DEBUG [IPC Server handler 1 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50483: starting=0A= 2015-02-21 17:54:58,720 DEBUG [IPC Server handler 3 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 3 on 50483: starting=0A= 2015-02-21 17:54:58,720 DEBUG [IPC Server handler 5 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 5 on 50483: starting=0A= 2015-02-21 17:54:58,721 DEBUG [IPC Server handler 6 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 6 on 50483: starting=0A= 2015-02-21 17:54:58,721 DEBUG [IPC Server handler 7 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 7 on 50483: starting=0A= 2015-02-21 17:54:58,721 DEBUG [IPC Server handler 8 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 8 on 50483: starting=0A= 2015-02-21 17:54:58,721 DEBUG [IPC Server handler 9 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 9 on 50483: starting=0A= 2015-02-21 17:54:58,722 DEBUG [IPC Server handler 10 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 10 on 50483: starting=0A= 2015-02-21 17:54:58,722 DEBUG [IPC 
Server handler 4 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 4 on 50483: starting=0A= 2015-02-21 17:54:58,722 DEBUG [IPC Server handler 11 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 11 on 50483: starting=0A= 2015-02-21 17:54:58,722 DEBUG [IPC Server handler 12 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 12 on 50483: starting=0A= 2015-02-21 17:54:58,722 DEBUG [IPC Server handler 13 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 13 on 50483: starting=0A= 2015-02-21 17:54:58,722 DEBUG [IPC Server handler 14 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 14 on 50483: starting=0A= 2015-02-21 17:54:58,722 DEBUG [IPC Server handler 15 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 15 on 50483: starting=0A= 2015-02-21 17:54:58,722 DEBUG [IPC Server handler 17 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 17 on 50483: starting=0A= 2015-02-21 17:54:58,722 DEBUG [IPC Server handler 16 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 16 on 50483: starting=0A= 2015-02-21 17:54:58,723 DEBUG [IPC Server handler 19 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 19 on 50483: starting=0A= 2015-02-21 17:54:58,723 DEBUG [IPC Server handler 18 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 18 on 50483: starting=0A= 2015-02-21 17:54:58,723 DEBUG [IPC Server handler 20 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 20 on 50483: starting=0A= 2015-02-21 17:54:58,723 DEBUG [IPC Server handler 22 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 22 on 50483: starting=0A= 2015-02-21 17:54:58,723 DEBUG [IPC Server handler 21 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 21 on 50483: starting=0A= 2015-02-21 17:54:58,723 DEBUG [IPC Server handler 25 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 25 on 50483: starting=0A= 2015-02-21 17:54:58,723 DEBUG [IPC Server handler 24 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 24 on 50483: starting=0A= 2015-02-21 17:54:58,723 DEBUG [IPC Server handler 23 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 23 on 50483: starting=0A= 2015-02-21 17:54:58,724 DEBUG [IPC Server handler 26 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 26 on 50483: starting=0A= 2015-02-21 17:54:58,724 DEBUG [IPC Server handler 27 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 27 on 50483: starting=0A= 2015-02-21 17:54:58,724 DEBUG [IPC Server handler 28 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 28 on 50483: starting=0A= 2015-02-21 17:54:58,724 DEBUG [main] = org.apache.hadoop.service.CompositeService: = org.apache.hadoop.mapred.TaskAttemptListenerImpl: starting services, = size=3D1=0A= 2015-02-21 17:54:58,725 DEBUG [IPC Server handler 29 on 50483] = org.apache.hadoop.ipc.Server: IPC Server handler 29 on 50483: starting=0A= 2015-02-21 17:54:58,725 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service TaskHeartbeatHandler = is started=0A= 2015-02-21 17:54:58,725 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service = org.apache.hadoop.mapred.TaskAttemptListenerImpl is started=0A= 2015-02-21 17:54:58,725 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service = org.apache.hadoop.mapreduce.v2.app.MRAppMaster$StagingDirCleaningService = is started=0A= 2015-02-21 17:54:58,741 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service: RMCommunicator = entered state INITED=0A= 2015-02-21 17:54:58,741 
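A note on the SASL exchange that follows: the AM authenticates to the RM scheduler with its AMRMToken, and AMRMTokenSelector picks that token by matching kind (YARN_AM_RM_TOKEN) and service address against the address being dialed. As a minimal sketch of how one could list those tokens from the AM side (my own illustration using the public Hadoop security API, not code from the job):

    import java.io.IOException;
    import org.apache.hadoop.security.Credentials;
    import org.apache.hadoop.security.UserGroupInformation;
    import org.apache.hadoop.security.token.Token;

    // Sketch: dump the tokens the current UGI carries, to see which RM
    // address the AMRMToken's service field names. AMRMTokenSelector
    // matches on kind + service, so that value must line up with the
    // scheduler address the AM actually connects to.
    public class DumpTokens {
        public static void main(String[] args) throws IOException {
            Credentials creds =
                UserGroupInformation.getCurrentUser().getCredentials();
            for (Token<?> t : creds.getAllTokens()) {
                System.out.println("kind=" + t.getKind()
                    + " service=" + t.getService());
            }
        }
    }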
2015-02-21 17:54:58,741 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2015-02-21 17:54:58,741 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2015-02-21 17:54:58,856 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; Ignoring.
2015-02-21 17:54:58,857 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2015-02-21 17:54:58,858 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf; Ignoring.
2015-02-21 17:54:58,868 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class; Ignoring.
2015-02-21 17:54:58,869 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf; Ignoring.
2015-02-21 17:54:58,875 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2015-02-21 17:54:58,879 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at quickstart.cloudera/192.168.2.185:8030
2015-02-21 17:54:58,888 DEBUG [main] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.yarn.client.RMProxy.getProxy(RMProxy.java:136)
2015-02-21 17:54:58,888 DEBUG [main] org.apache.hadoop.yarn.ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
2015-02-21 17:54:58,888 DEBUG [main] org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ApplicationMasterProtocol
2015-02-21 17:54:58,937 DEBUG [main] org.apache.hadoop.ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@27c8bfa4
2015-02-21 17:54:59,001 DEBUG [main] org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.
2015-02-21 17:54:59,002 DEBUG [main] org.apache.hadoop.ipc.Client: Connecting to quickstart.cloudera/192.168.2.185:8030
2015-02-21 17:54:59,005 DEBUG [main] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
2015-02-21 17:54:59,005 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 17:54:59,049 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"JP1FdRqZpK7TkUIOEbppp5YH4EhPKzZ/HLsr1Wjb\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}

2015-02-21 17:54:59,061 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB info:org.apache.hadoop.yarn.security.SchedulerSecurityInfo$1@26c13868
2015-02-21 17:54:59,062 DEBUG [main] org.apache.hadoop.yarn.security.AMRMTokenSelector: Looking for a token with service 192.168.2.185:8030
2015-02-21 17:54:59,063 DEBUG [main] org.apache.hadoop.yarn.security.AMRMTokenSelector: Token kind is YARN_AM_RM_TOKEN and the token's service name is 192.168.2.185:8030
2015-02-21 17:54:59,071 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2015-02-21 17:54:59,074 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for protocol ApplicationMasterProtocolPB
2015-02-21 17:54:59,081 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting username: AAABS63OA3sAAAABAAAAAV8/5o8=
2015-02-21 17:54:59,081 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting userPassword
2015-02-21 17:54:59,081 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting realm: default
2015-02-21 17:54:59,083 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
token: "charset=utf-8,username=\"AAABS63OA3sAAAABAAAAAV8/5o8=\",realm=\"default\",nonce=\"JP1FdRqZpK7TkUIOEbppp5YH4EhPKzZ/HLsr1Wjb\",nc=00000001,cnonce=\"D8teuBtuFWuryKopW7NGZo3cEKxEcedwVMV/js0H\",digest-uri=\"/default\",maxbuf=65536,response=7db4bef5221ba40af6f3ccdcd1f478c6,qop=auth"
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
}

2015-02-21 17:54:59,105 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424550134651_0001_000001 not found in AMRMTokenSecretManager.
2015-02-21 17:54:59,106 DEBUG [main] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:642)
2015-02-21 17:54:59,107 WARN [main] org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424550134651_0001_000001 not found in AMRMTokenSecretManager.
2015-02-21 17:54:59,107 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424550134651_0001_000001 not found in AMRMTokenSecretManager.
2015-02-21 17:54:59,107 DEBUG [main] org.apache.hadoop.ipc.Client: closing ipc connection to quickstart.cloudera/192.168.2.185:8030: appattempt_1424550134651_0001_000001 not found in AMRMTokenSecretManager.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424550134651_0001_000001 not found in AMRMTokenSecretManager.
	at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:375)
	at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:552)
	at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:367)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:717)
	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:713)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
	at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
	at org.apache.hadoop.ipc.Client.call(Client.java:1382)
	at org.apache.hadoop.ipc.Client.call(Client.java:1364)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at com.sun.proxy.$Proxy35.registerApplicationMaster(Unknown Source)
	at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(ApplicationMasterProtocolPBClientImpl.java:106)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy36.registerApplicationMaster(Unknown Source)
	at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:161)
	at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:122)
	at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:238)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.serviceStart(MRAppMaster.java:807)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1075)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.run(MRAppMaster.java:1478)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1474)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1407)
2015-02-21 17:54:59,108 DEBUG [main] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to quickstart.cloudera/192.168.2.185:8030 from cloudera: closed
2015-02-21 17:54:59,114 ERROR [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Exception while registering
org.apache.hadoop.security.token.SecretManager$InvalidToken: appattempt_1424550134651_0001_000001 not found in AMRMTokenSecretManager.
	[stack trace equivalent to the one above, with the InvalidToken re-instantiated through org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104); Caused by: the RemoteException shown above ... 22 more]
2015-02-21 17:54:59,115 DEBUG [main] org.apache.hadoop.service.AbstractService: noteFailure org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.token.SecretManager$InvalidToken: appattempt_1424550134651_0001_000001 not found in AMRMTokenSecretManager.
2015-02-21 17:54:59,115 INFO [main] org.apache.hadoop.service.AbstractService: Service RMCommunicator failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.token.SecretManager$InvalidToken: appattempt_1424550134651_0001_000001 not found in AMRMTokenSecretManager.
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.token.SecretManager$InvalidToken: appattempt_1424550134651_0001_000001 not found in AMRMTokenSecretManager.
	at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:178)
	[remaining frames identical to the serviceStart chain above; Caused by: InvalidToken ... 14 more; Caused by: RemoteException ... 22 more]
2015-02-21 17:54:59,117 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: RMCommunicator entered state STOPPED
2015-02-21 17:54:59,123 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
2015-02-21 17:54:59,124 INFO [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.token.SecretManager$InvalidToken: appattempt_1424550134651_0001_000001 not found in AMRMTokenSecretManager. [exception and stack trace identical to the RMCommunicator failure above]
2015-02-21 17:54:59,124 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter entered state STOPPED
2015-02-21 17:54:59,124 INFO [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.security.token.SecretManager$InvalidToken: appattempt_1424550134651_0001_000001 not found in AMRMTokenSecretManager. [exception and stack trace identical to the RMCommunicator failure above]
2015-02-21 17:54:59,125 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapreduce.v2.app.MRAppMaster entered state STOPPED
2015-02-21 17:54:59,125 DEBUG [main] org.apache.hadoop.service.CompositeService: org.apache.hadoop.mapreduce.v2.app.MRAppMaster: stopping services, size=7
2015-02-21 17:54:59,125 DEBUG [main] org.apache.hadoop.service.CompositeService: Stopping service #6: Service JobHistoryEventHandler in state JobHistoryEventHandler: INITED
2015-02-21 17:54:59,125 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: JobHistoryEventHandler entered state STOPPED
2015-02-21 17:54:59,125 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping JobHistoryEventHandler. Size of the outstanding queue size is 2
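Worth flagging before the JobHistory section: in the trace above the AM registers with a scheduler at quickstart.cloudera/192.168.2.185:8030 (the client VM), while the HDFS calls below go to hadoop0.rdpratti.com/192.168.2.253:8020 (the cluster). An RM at 192.168.2.185 would never have issued this attempt's AMRMToken, which would be consistent with the InvalidToken failure. A small sketch for checking what the client-side conf resolves to (my own illustration; the property names are standard YARN keys, and the expectation that they should name the cluster host rather than the client VM is an assumption to verify):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    // Sketch: print the RM addresses a client-side conf file resolves to.
    // hadoop-cluster.xml is the file passed via -conf at submit time;
    // adjust the path as needed for your setup.
    public class CheckRmAddress {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            conf.addResource(new Path("conf/hadoop-cluster.xml"));
            System.out.println("yarn.resourcemanager.address = "
                + conf.get("yarn.resourcemanager.address"));
            System.out.println("yarn.resourcemanager.scheduler.address = "
                + conf.get("yarn.resourcemanager.scheduler.address"));
        }
    }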
2015-02-21 17:54:59,125 DEBUG [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Null event handling thread
2015-02-21 17:54:59,126 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In stop, writing event AM_STARTED
2015-02-21 17:54:59,133 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: /user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001_1.jhist: masked=rw-r--r--
2015-02-21 17:54:59,145 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #11
2015-02-21 17:54:59,171 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #11
2015-02-21 17:54:59,171 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: create took 27ms
2015-02-21 17:54:59,174 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: computePacketChunkSize: src=/user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001_1.jhist, chunkSize=516, chunksPerPacket=127, packetSize=65532
2015-02-21 17:54:59,202 DEBUG [LeaseRenewer:cloudera@hadoop0.rdpratti.com:8020] org.apache.hadoop.hdfs.LeaseRenewer: Lease renewer daemon for [DFSClient_NONMAPREDUCE_930231345_1] with renew id 1 started
2015-02-21 17:54:59,281 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
2015-02-21 17:54:59,304 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1424550134651_0001, File: hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001_1.jhist
2015-02-21 17:54:59,304 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: /user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001_1_conf.xml: masked=rw-r--r--
2015-02-21 17:54:59,305 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #12
2015-02-21 17:54:59,315 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #12
2015-02-21 17:54:59,315 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: create took 11ms
2015-02-21 17:54:59,315 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: computePacketChunkSize: src=/user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001_1_conf.xml, chunkSize=516, chunksPerPacket=127, packetSize=65532
2015-02-21 17:54:59,317 DEBUG [main] org.apache.hadoop.conf.Configuration: Handling deprecation for all properties in config...
2015-02-21 17:54:59,318 DEBUG [main] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.datanode.data.dir
[17:54:59,318 onward: several hundred near-identical DEBUG lines, one "Handling deprecation for <property>" per key in the job configuration (dfs.*, yarn.*, mapreduce.*, hadoop.*, io.*, fs.*, ipc.*, ha.*, nfs.*, s3.*, net.*, file.*, ftp.*, zlib.*); the run continues]
2015-02-21 17:54:59,327 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.webhdfs.user.provider.user.pattern=0A= 2015-02-21 17:54:59,327 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.delegation.token.renew-interval=0A= 2015-02-21 17:54:59,327 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.ssl.keystores.factory.class=0A= 2015-02-21 17:54:59,327 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.http.policy=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.sync.behind.writes=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for nfs.wtmax=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.AbstractFileSystem.har.impl=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.read.shortcircuit.skip.checksum=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.random.device.file.path=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.map.maxattempts=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.timeline-service.webapp.address=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.handler.count=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.ssl.require.client.cert=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ftp.client-write-packet-size=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.write.exclude.nodes.cache.expiry.interval.millis=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ipc.server.tcpnodelay=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.cleaner.enable=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.du.interval=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.shuffle.retry-delay.max.ms=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.task.profile.reduces=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ha.health-monitor.connect-retry-interval.ms=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.fuse.connection.timeout=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.permissions.superusergroup=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.jobhistory.task.numberprogresssplits=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.ftp.host.port=0A= 2015-02-21 17:54:59,328 
DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.map.speculative=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.data.dir.perm=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.client.submit.file.replication=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.startup.delay.block.deletion.sec=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = s3native.blocksize=0A= 2015-02-21 17:54:59,328 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.ubertask.maxmaps=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.replication.min=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.cluster.acls.enabled=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.uid.cache.secs=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.webapp.https.address=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = nfs.allow.insecure.ports=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.localizer.fetch.thread-count=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = map.sort.class=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.proxyuser.hue.groups=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.trash.checkpoint.interval=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapred.queue.default.acl-administer-jobs=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.image.transfer.timeout=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.name.dir=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ipc.client.connect.timeout=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.app.mapreduce.am.staging-dir=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.AbstractFileSystem.file.impl=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.env-whitelist=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.timeline-service.keytab=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.image.compression.codec=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.reduces=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = 
mapreduce.job.complete.cancel.delegation.tokens=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.recovery.store.class=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.group.mapping.ldap.search.filter.user=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.enable.retrycache=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.sleep-delay-before-sigkill.ms=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.joblist.cache.size=0A= 2015-02-21 17:54:59,329 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.tasktracker.healthchecker.interval=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.heartbeats.in.second=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.auth_to_local=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.persist.jobstatus.dir=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.backup.http-address=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.rpc.protection=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.mmap.enabled=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.app.mapreduce.am.container.log.backups=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ftp.stream-buffer-size=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.https-address=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.timeline-service.address=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.ha.log-roll.period=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.recovery.enabled=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.input.fileinputformat.numinputfiles=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.groups.negative-cache.secs=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.admin.client.thread-count=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.fsdatasetcache.max.threads.per.volume=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = file.client-write-packet-size=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.http.authentication.simple.anonymous.allowed=0A= 
2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.timeline-service.leveldb-timeline-store.path=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.proxy-user-privileges.enabled=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.drop.cache.behind.reads=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.log.retain-seconds=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.image.transfer.bandwidthPerSec=0A= 2015-02-21 17:54:59,330 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.slow.io.warning.threshold.ms=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.tasktracker.instrumentation=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ha.failover-controller.cli-check.rpc-timeout.ms=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.linux-container-executor.cgroups.hierarchy=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.write.stale.datanode.ratio=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.groups.cache.warn.after.ms=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.shuffle.fetch.retry.timeout-ms=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.client.thread-count=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = io.mapfile.bloom.size=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.work-preserving-recovery.enabled=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.ha.fencing.ssh.connect-timeout=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.zk-num-retries=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = s3.bytes-per-checksum=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.app.mapreduce.am.container.log.limit.kb=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.edit.log.autoroll.check.interval.ms=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.automatic.close=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.trash.interval=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = 
dfs.journalnode.https-address=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.timeline-service.ttl-ms=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.authentication=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.defaultFS=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.timeline-service.generic-application-history.enabled=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for nfs.rtmax=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.ssl.server.conf=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ipc.client.connect.max.retries=0A= 2015-02-21 17:54:59,331 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = "hadoop.security.kms.client.encrypted.key.cache.expiry=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.delayed.delegation-token.removal-interval-ms=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.journalnode.http-address=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.xattrs.enabled=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.proxyuser.httpfs.hosts=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.shared.file.descriptor.paths=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.taskscheduler=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.speculative.speculativecap=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.timeline-service.store-class=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.am.liveness-monitor.expiry-interval-ms=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.output.fileoutputformat.compress=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.user.home.dir.prefix=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.app.mapreduce.am.log.level=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = net.topology.node.switch.mapping.impl=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.replication.considerLoad=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.fs-limits.min-block-size=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.swift.impl=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = 
dfs.namenode.audit.loggers=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.max.split.locations=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.address=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.counters.max=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.shuffle.fetch.retry.enabled=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.block.write.retries=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.nm.liveness-monitor.interval-ms=0A= 2015-02-21 17:54:59,332 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.short.circuit.shared.memory.watcher.interrupt.check.ms=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = io.map.index.interval=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapred.child.java.opts=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.tasktracker.local.dir.minspacestart=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.client.progressmonitor.pollinterval=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.https.keystore.resource=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.task.profile.map.params=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = rpc.engine.org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.tasktracker.maxblacklists=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.queuename=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.localizer.address=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = io.mapfile.bloom.error.rate=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.delete.thread-count=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.split.metainfo.maxsize=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.scheduler.maximum-allocation-vcores=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapred.mapper.new-api=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ipc.client.tcpnodelay=0A= 2015-02-21 17:54:59,333 
DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.dir=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.https.port=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.app.mapreduce.am.resource.mb=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.dns.nameserver=0A= 2015-02-21 17:54:59,333 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.slow.io.warning.threshold.ms=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.reducer.preempt.delay.sec=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.map.output.compress.codec=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.accesstime.precision=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.map.log.level=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.s3a.connection.maximum=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = io.seqfile.compress.blocksize=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.tasktracker.taskcontroller=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.groups.cache.secs=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.cache.revocation.timeout.ms=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.context=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.proxyuser.hive.groups=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.input.lineinputformat.linespermap=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.end-notification.max.attempts=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.webapp.address=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.submithostname=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.recovery.enable=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.expire.trackers.interval=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.proxyuser.flume.hosts=0A= 
2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.proxyuser.hdfs.hosts=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.webapp.address=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.kms.client.encrypted.key.cache.num.refill.threads=0A= 2015-02-21 17:54:59,334 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.health-checker.interval-ms=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.loadedjobs.cache.size=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.history-writer.multi-threaded-dispatcher.pool-size=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.authorization=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.proxyuser.mapred.groups=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.map.output.collector.class=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.am.max-attempts=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.ftp.host=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.s3a.attempts.maximum=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.servicerpc-address=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.ifile.readahead=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.scheduler.monitor.enable=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.zk-retry-interval-ms=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = rpc.engine.org.apache.hadoop.ipc.ProtocolMetaInfoPB=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ha.zookeeper.session-timeout.ms=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.tasktracker.taskmemorymanager.monitoringinterval=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.shuffle.parallelcopies=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.mmap.retry.timeout.ms=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.map.skip.maxrecords=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.output.value.class=0A= 2015-02-21 17:54:59,335 DEBUG 
[main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.classloader.system.classes=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.avoid.read.stale.datanode=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.https.enable=0A= 2015-02-21 17:54:59,335 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.timeline-service.webapp.https.address=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.shuffle.read.timeout=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.list.encryption.zones.num.responses=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.remote-app-log-dir-suffix=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.output.fileoutputformat.compress.codec=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.instrumentation=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.blockreport.intervalMsec=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ipc.client.connect.retry.interval=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.speculative=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.keytab=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.datestring.cache.size=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.balance.bandwidthPerSec=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = file.blocksize=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.admin.address=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.map.cpu.vcores=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.container-monitor.procfs-tree.smaps-based-rss.enabled=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.configuration.provider-class=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.resource-tracker.address=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.tasktracker.local.dir.minspacekill=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.staging.root.dir=0A= 2015-02-21 17:54:59,336 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.retiredjobs.cache.size=0A= 2015-02-21 17:54:59,336 DEBUG [main] = 
org.apache.hadoop.conf.Configuration: Handling deprecation for = ipc.client.connect.max.retries.on.timeouts=0A= 2015-02-21 17:54:59,340 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ha.zookeeper.acl=0A= 2015-02-21 17:54:59,340 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.crypto.codec.classes.aes.ctr.nopadding=0A= 2015-02-21 17:54:59,340 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.local-dirs=0A= 2015-02-21 17:54:59,340 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.app-submission.cross-platform=0A= 2015-02-21 17:54:59,340 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.shuffle.connect.timeout=0A= 2015-02-21 17:54:59,340 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.block.access.key.update.interval=0A= 2015-02-21 17:54:59,340 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = rpc.metrics.quantile.enable=0A= 2015-02-21 17:54:59,340 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.block.access.token.lifetime=0A= 2015-02-21 17:54:59,340 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.end-notification.retry.attempts=0A= 2015-02-21 17:54:59,340 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.fs-limits.max-xattrs-per-inode=0A= 2015-02-21 17:54:59,340 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.system.dir=0A= 2015-02-21 17:54:59,340 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.file-block-storage-locations.timeout.millis=0A= 2015-02-21 17:54:59,340 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.admin-env=0A= 2015-02-21 17:54:59,340 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.jobhistory.block.size=0A= 2015-02-21 17:54:59,340 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.log-aggregation.retain-seconds=0A= 2015-02-21 17:54:59,340 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.tasktracker.indexcache.mb=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.timeline-service.handler-thread-count=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.checkpoint.check.period=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.hostname=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.block.write.replace-datanode-on-failure.enable=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = net.topology.impl=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.directoryscan.interval=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.s3a.multipart.purge.age=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = 
hadoop.security.java.secure.random.algorithm=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.container-monitor.interval-ms=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.default.chunk.view.size=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.s3a.multipart.threshold=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.speculative.slownodethreshold=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.reduce.slowstart.completedmaps=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.proxyuser.HTTP.groups=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapred.reducer.new-api=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.instrumentation.requires.admin=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = io.compression.codec.bzip2.library=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.http.authentication.signature.secret.file=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.safemode.min.datanodes=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.localizer.cache.target-size-mb=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.input.fileinputformat.inputdir=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.maxattempts=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.https.address=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = s3native.replication=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.inotify.max.events.per.rpc=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.path.based.cache.retry.interval.ms=0A= 2015-02-21 17:54:59,341 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.skip.proc.count.autoincr=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.cache.revocation.polling.ms=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.cleaner.interval-ms=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = file.replication=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.hdfs.configuration.version=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.proxyuser.flume.groups=0A= 2015-02-21 17:54:59,342 DEBUG [main] = 
org.apache.hadoop.conf.Configuration: Handling deprecation for = ipc.client.idlethreshold=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.tmp.dir=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.store.class=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.address=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.restart.recover=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.cluster.local.dir=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.client.nodemanager-client-async.thread-pool-max-size=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.ipc.serializer.type=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.decommission.nodes.per.interval=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.resource.cpu-vcores=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.reject-unresolved-dn-topology-mapping=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.delegation.key.update-interval=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.s3.buffer.dir=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.read.shortcircuit.streams.cache.expiry.ms=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.support.allow.format=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.remote-app-log-dir=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = io.compression.codecs=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.map.memory.mb=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.edit.log.autoroll.multiplier.threshold=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.work.around.non.threadsafe.getpwuid=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.task.profile.reduce.params=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.ha.automatic-failover.enabled=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.edits.noeditlogchannelflush=0A= 2015-02-21 17:54:59,342 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.stale.datanode.interval=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = 
mapreduce.shuffle.transfer.buffer.size=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.persist.jobstatus.active=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.logging.level=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.log-dirs=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ha.health-monitor.sleep-after-disconnect.ms=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.fs.state-store.uri=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.checkpoint.edits.dir=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.keytab=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.rpc.socket.factory.class.default=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.http.address=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.task.profile=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.move.interval-ms=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.scheduler.fair.sizebasedweight=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.edits.dir=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.kms.client.encrypted.key.cache.size=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.dispatcher.exit-on-error=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.fuse.timer.period=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.http.policy=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.intermediate-done-dir=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.map.skip.proc.count.autoincr=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.AbstractFileSystem.viewfs.impl=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.speculative.slowtaskthreshold=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = s3native.stream-buffer-size=0A= 2015-02-21 17:54:59,343 DEBUG [main] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.delete.debug-delay-sec=0A= 2015-02-21 17:54:59,343 DEBUG [main] = 
org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.secondary.namenode.kerberos.internal.spnego.principal
[2015-02-21 17:54:59,343-350 DEBUG [main] org.apache.hadoop.conf.Configuration: "Handling deprecation for <key>" is logged once, in the same form, for each of the following keys: dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold, fs.s3n.multipart.uploads.block.size, dfs.namenode.safemode.threshold-pct, mapreduce.ifile.readahead.bytes, yarn.scheduler.maximum-allocation-mb, ipc.client.fallback-to-simple-auth-allowed, fs.har.impl.disable.cache, yarn.timeline-service.leveldb-timeline-store.read-cache-size, yarn.timeline-service.hostname, s3native.bytes-per-checksum, mapreduce.job.committer.setup.cleanup.needed, yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms, fs.s3a.paging.maximum, yarn.client.nodemanager-connect.retry-interval-ms, yarn.nodemanager.log-aggregation.compression-type, yarn.app.mapreduce.am.job.committer.commit-window, hadoop.http.authentication.type, dfs.client.failover.sleep.base.millis, mapreduce.job.submithostaddress, yarn.nodemanager.vmem-check-enabled, hadoop.jetty.logs.serve.aliases, ha.failover-controller.graceful-fence.rpc-timeout.ms, mapreduce.reduce.shuffle.input.buffer.percent, dfs.datanode.max.transfer.threads, mapreduce.reduce.merge.inmem.threshold, mapreduce.task.io.sort.mb, dfs.namenode.acls.enabled, hadoop.security.kms.client.authentication.retry-count, yarn.client.application-client-protocol.poll-interval-ms, dfs.namenode.handler.count, yarn.resourcemanager.connect.max-wait.ms, dfs.namenode.retrycache.heap.percent, yarn.timeline-service.enabled, yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users, hadoop.ssl.client.conf, yarn.resourcemanager.container.liveness-monitor.interval-ms, yarn.nodemanager.vmem-pmem-ratio, mapreduce.client.completion.pollinterval, yarn.app.mapreduce.client.max-retries, hadoop.ssl.enabled, fs.client.resolve.remote.symlinks, fs.AbstractFileSystem.hdfs.impl, yarn.app.mapreduce.am.admin.user.env, mapreduce.reduce.java.opts, mapreduce.client.genericoptionsparser.used, mapreduce.tasktracker.reduce.tasks.maximum, mapreduce.map.java.opts, yarn.nodemanager.hostname, mapreduce.reduce.input.buffer.percent, yarn.scheduler.fair.assignmultiple, fs.s3a.multipart.purge, yarn.app.mapreduce.am.command-opts, dfs.namenode.invalidate.work.pct.per.iteration, dfs.bytes-per-checksum, hadoop.proxyuser.oozie.groups, yarn.resourcemanager.webapp.https.address, dfs.replication, dfs.datanode.block.id.layout.upgrade.threads, mapreduce.shuffle.ssl.file.buffer.size, dfs.namenode.list.cache.directives.num.responses, dfs.permissions.enabled, hadoop.proxyuser.oozie.hosts, mapreduce.jobtracker.maxtasks.perjob, dfs.datanode.use.datanode.hostname, mapreduce.task.userlog.limit.kb, dfs.namenode.fs-limits.max-directory-items, hadoop.security.kms.client.encrypted.key.cache.low-watermark, fs.s3a.buffer.dir, s3.client-write-packet-size, hadoop.user.group.static.mapping.overrides, mapreduce.shuffle.max.threads, dfs.client.failover.sleep.max.millis, mapreduce.job.maps, dfs.namenode.fs-limits.max-component-length, yarn.root.logger, hadoop.ssl.enabled.protocols, s3.blocksize, mapreduce.map.output.compress, yarn.timeline-service.generic-application-history.fs-history-store.uri, dfs.namenode.edits.journal-plugin.qjournal, dfs.namenode.datanode.registration.ip-hostname-check, yarn.nodemanager.pmem-check-enabled, dfs.client.short.circuit.replica.stale.threshold.ms, dfs.client.https.need-auth, yarn.scheduler.minimum-allocation-mb, hadoop.proxyuser.hive.hosts, mapreduce.jobhistory.max-age-ms, ftp.replication, dfs.namenode.secondary.https-address, dfs.blockreport.split.threshold, mapreduce.input.fileinputformat.split.minsize, fs.s3n.block.size, mapreduce.job.token.tracking.ids.enabled, yarn.ipc.rpc.class, dfs.namenode.num.extra.edits.retained, yarn.nodemanager.localizer.cache.cleanup.interval-ms, hadoop.http.staticuser.user, mapreduce.jobhistory.move.thread-count, fs.s3a.multipart.size, mapreduce.job.jvm.numtasks, mapreduce.task.profile.maps, yarn.nodemanager.resourcemanager.connect.wait.secs, dfs.datanode.max.locked.memory, dfs.cachereport.intervalMsec, mapreduce.shuffle.port, yarn.resourcemanager.nodemanager.minimum.version, mapreduce.shuffle.connection-keep-alive.timeout, mapreduce.reduce.shuffle.merge.percent, mapreduce.jobtracker.http.address, mapreduce.task.skip.start.attempts, yarn.resourcemanager.connect.retry-interval.ms, yarn.scheduler.minimum-allocation-vcores, mapreduce.task.io.sort.factor, dfs.namenode.checkpoint.dir, nfs.exports.allowed.hosts, tfile.fs.input.buffer.size, fs.s3.block.size, tfile.io.chunk.size, fs.s3n.multipart.copy.block.size, io.serializations, yarn.resourcemanager.max-completed-applications, mapreduce.jobhistory.principal, mapreduce.output.fileoutputformat.outputdir, yarn.resourcemanager.ha.automatic-failover.zk-base-path, mapreduce.reduce.shuffle.fetch.retry.interval-ms, mapreduce.job.end-notification.retry.interval, dfs.namenode.backup.address, fs.s3n.multipart.uploads.enabled, io.seqfile.sorter.recordlimit, dfs.block.access.token.enable, s3native.client-write-packet-size, dfs.namenode.fs-limits.max-xattr-size, ftp.bytes-per-checksum, hadoop.security.group.mapping, dfs.client.domain.socket.data.traffic, dfs.client.read.shortcircuit.streams.cache.size, fs.s3a.connection.timeout, mapreduce.job.end-notification.max.retry.interval, yarn.acl.enable, yarn.nm.liveness-monitor.expiry-interval-ms, mapreduce.application.classpath, hadoop.root.logger, mapreduce.input.fileinputformat.list-status.num-threads, dfs.client.mmap.cache.size, mapreduce.tasktracker.map.tasks.maximum, yarn.scheduler.fair.user-as-default-queue, yarn.timeline-service.ttl-enable, yarn.nodemanager.linux-container-executor.resources-handler.class, dfs.namenode.max.objects, yarn.resourcemanager.state-store.max-completed-applications, dfs.namenode.delegation.token.max-lifetime, mapreduce.job.classloader, yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size, mapreduce.job.hdfs-servers, yarn.application.classpath, dfs.datanode.hdfs-blocks-metadata.enabled, mapreduce.tasktracker.dns.nameserver, dfs.datanode.readahead.bytes, mapreduce.job.ubertask.maxreduces, dfs.image.compress, mapreduce.shuffle.ssl.enabled, yarn.log-aggregation-enable, mapreduce.tasktracker.report.address, mapreduce.tasktracker.http.threads, dfs.stream-buffer-size, tfile.fs.output.buffer.size, fs.permissions.umask-mode, dfs.client.datanode-restart.timeout, yarn.resourcemanager.am.max-attempts, ha.failover-controller.graceful-fence.connection.retries, hadoop.proxyuser.hdfs.groups, dfs.datanode.drop.cache.behind.writes, mapreduce.job.application.attempt.id, mapreduce.map.output.value.class, hadoop.proxyuser.HTTP.hosts, hadoop.common.configuration.version, mapreduce.job.ubertask.enable, yarn.app.mapreduce.am.resource.cpu-vcores, dfs.namenode.replication.work.multiplier.per.iteration, mapreduce.job.acl-modify-job, io.seqfile.local.dir, fs.s3.sleepTimeSeconds, mapreduce.client.output.filter]
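A note on all the "Handling deprecation" records above: Configuration keeps a table mapping old property names to their replacements, so a job may set either name and read back the other; resolving those pairs is what produces this DEBUG noise. A minimal sketch, assuming a stock Hadoop 2.x client on the classpath (this is illustrative, not taken from the job):

    import org.apache.hadoop.conf.Configuration;

    public class DeprecationDemo {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Set a long-deprecated MRv1 key...
            conf.set("mapred.reduce.tasks", "4");
            // ...and read it back through its MRv2 replacement.
            // Resolving the pair emits "Handling deprecation" records.
            System.out.println(conf.get("mapreduce.job.reduces")); // prints 4
        }
    }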
2015-02-21 17:54:59,506 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001_1_conf.xml, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
2015-02-21 17:54:59,589 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk packet full seqno=0, src=/user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001_1_conf.xml, bytesCurBlock=65024, blockSize=134217728, appendChunk=false
2015-02-21 17:54:59,589 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: Queued packet 0
2015-02-21 17:54:59,589 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: computePacketChunkSize: src=/user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001_1_conf.xml, chunkSize=516, chunksPerPacket=127, packetSize=65532
2015-02-21 17:54:59,589 DEBUG [Thread-54] org.apache.hadoop.hdfs.DFSClient: Allocating new block
2015-02-21 17:54:59,589 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=1, src=/user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001_1_conf.xml, packetSize=65532, chunksPerPacket=127, bytesCurBlock=65024
2015-02-21 17:54:59,607 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #13
2015-02-21 17:54:59,615 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #13
2015-02-21 17:54:59,615 DEBUG [Thread-54] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: addBlock took 8ms
2015-02-21 17:54:59,618 DEBUG [Thread-54] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.253:50010
2015-02-21 17:54:59,618 DEBUG [Thread-54] org.apache.hadoop.hdfs.DFSClient: Connecting to datanode 192.168.2.253:50010
2015-02-21 17:54:59,619 DEBUG [Thread-54] org.apache.hadoop.hdfs.DFSClient: Send buf size 124928
2015-02-21 17:54:59,619 DEBUG [Thread-54] org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /192.168.2.253, datanodeId = 192.168.2.253:50010
2015-02-21 17:54:59,624 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: Queued packet 1
2015-02-21 17:54:59,624 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: Queued packet 2
2015-02-21 17:54:59,624 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 2
2015-02-21 17:54:59,692 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001_1_conf.xml block BP-268700609-192.168.2.253-1419532004456:blk_1073754485_13661] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754485_13661 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 65024
2015-02-21 17:54:59,696 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001_1_conf.xml block BP-268700609-192.168.2.253-1419532004456:blk_1073754485_13661] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754485_13661 sending packet packet seqno:1 offsetInBlock:65024 lastPacketInBlock:false lastByteOffsetInBlock: 108059
2015-02-21 17:54:59,724 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754485_13661] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS downstreamAckTimeNanos: 0
2015-02-21 17:54:59,724 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754485_13661] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS downstreamAckTimeNanos: 0
2015-02-21 17:54:59,724 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001_1_conf.xml block BP-268700609-192.168.2.253-1419532004456:blk_1073754485_13661] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754485_13661 sending packet packet seqno:2 offsetInBlock:108059 lastPacketInBlock:true lastByteOffsetInBlock: 108059
2015-02-21 17:54:59,727 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754485_13661] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 2 status: SUCCESS downstreamAckTimeNanos: 0
2015-02-21 17:54:59,727 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001_1_conf.xml block BP-268700609-192.168.2.253-1419532004456:blk_1073754485_13661] org.apache.hadoop.hdfs.DFSClient: Closing old block BP-268700609-192.168.2.253-1419532004456:blk_1073754485_13661
2015-02-21 17:54:59,729 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #14
2015-02-21 17:54:59,737 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #14
2015-02-21 17:54:59,737 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: complete took 8ms
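For readers following the DataStreamer/ResponseProcessor records above: this is the ordinary HDFS write path. The client chops the staged job_..._conf.xml into ~64 KB packets, a DataStreamer thread pushes them down the (here single-datanode) pipeline, and a ResponseProcessor collects one ack per seqno. A minimal sketch of the same path, assuming default configuration; the path and payload below are made up, not the author's code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            try (FileSystem fs = FileSystem.get(conf);
                 FSDataOutputStream out = fs.create(new Path("/tmp/example.xml"))) {
                out.writeBytes("<configuration/>\n");
                // close() flushes the last packet and blocks until the
                // pipeline acks it -- the "Waiting for ack for: N" lines.
            }
        }
    }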
2015-02-21 17:54:59,743 DEBUG [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 17:54:59,758 DEBUG [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler AM_STARTED
2015-02-21 17:54:59,758 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In stop, writing event JOB_SUBMITTED
2015-02-21 17:54:59,758 DEBUG [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 17:54:59,760 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: /user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001.summary: masked=rw-r--r--
2015-02-21 17:54:59,760 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #15
2015-02-21 17:54:59,771 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #15
2015-02-21 17:54:59,771 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: create took 11ms
2015-02-21 17:54:59,771 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: computePacketChunkSize: src=/user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001.summary, chunkSize=516, chunksPerPacket=127, packetSize=65532
2015-02-21 17:54:59,772 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001.summary, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
2015-02-21 17:54:59,772 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: Queued packet 0
2015-02-21 17:54:59,772 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: Queued packet 1
2015-02-21 17:54:59,772 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 1
2015-02-21 17:54:59,772 DEBUG [Thread-57] org.apache.hadoop.hdfs.DFSClient: Allocating new block
2015-02-21 17:54:59,773 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #16
2015-02-21 17:54:59,782 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #16
2015-02-21 17:54:59,782 DEBUG [Thread-57] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: addBlock took 9ms
2015-02-21 17:54:59,783 DEBUG [Thread-57] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.253:50010
2015-02-21 17:54:59,783 DEBUG [Thread-57] org.apache.hadoop.hdfs.DFSClient: Connecting to datanode 192.168.2.253:50010
2015-02-21 17:54:59,783 DEBUG [Thread-57] org.apache.hadoop.hdfs.DFSClient: Send buf size 124928
2015-02-21 17:54:59,783 DEBUG [Thread-57] org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /192.168.2.253, datanodeId = 192.168.2.253:50010
2015-02-21 17:54:59,785 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001.summary block BP-268700609-192.168.2.253-1419532004456:blk_1073754486_13662] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754486_13662 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 301
2015-02-21 17:54:59,789 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754486_13662] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS downstreamAckTimeNanos: 0
2015-02-21 17:54:59,789 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001.summary block BP-268700609-192.168.2.253-1419532004456:blk_1073754486_13662] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754486_13662 sending packet packet seqno:1 offsetInBlock:301 lastPacketInBlock:true lastByteOffsetInBlock: 301
2015-02-21 17:54:59,791 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754486_13662] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS downstreamAckTimeNanos: 0
2015-02-21 17:54:59,792 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #17
2015-02-21 17:54:59,804 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #17
2015-02-21 17:54:59,804 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: complete took 13ms
2015-02-21 17:54:59,804 DEBUG [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler JOB_SUBMITTED
2015-02-21 17:54:59,804 DEBUG [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Closing Writer
2015-02-21 17:54:59,804 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: Queued packet 0
2015-02-21 17:54:59,804 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: Queued packet 1
2015-02-21 17:54:59,804 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 1
2015-02-21 17:54:59,804 DEBUG [Thread-52] org.apache.hadoop.hdfs.DFSClient: Allocating new block
2015-02-21 17:54:59,805 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #18
2015-02-21 17:54:59,815 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #18
2015-02-21 17:54:59,815 DEBUG [Thread-52] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: addBlock took 11ms
2015-02-21 17:54:59,815 DEBUG [Thread-52] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.253:50010
2015-02-21 17:54:59,815 DEBUG [Thread-52] org.apache.hadoop.hdfs.DFSClient: Connecting to datanode 192.168.2.253:50010
2015-02-21 17:54:59,816 DEBUG [Thread-52] org.apache.hadoop.hdfs.DFSClient: Send buf size 124928
2015-02-21 17:54:59,816 DEBUG [Thread-52] org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /192.168.2.253, datanodeId = 192.168.2.253:50010
2015-02-21 17:54:59,817 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754487_13663] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754487_13663 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 8500
2015-02-21 17:54:59,818 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754487_13663] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS downstreamAckTimeNanos: 0
2015-02-21 17:54:59,818 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754487_13663] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754487_13663 sending packet packet seqno:1 offsetInBlock:8500 lastPacketInBlock:true lastByteOffsetInBlock: 8500
2015-02-21 17:54:59,820 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754487_13663] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS downstreamAckTimeNanos: 0
2015-02-21 17:54:59,821 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0001/job_1424550134651_0001_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754487_13663] org.apache.hadoop.hdfs.DFSClient: Closing old block BP-268700609-192.168.2.253-1419532004456:blk_1073754487_13663
2015-02-21 17:54:59,821 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #19
2015-02-21 17:54:59,826 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #19
2015-02-21 17:54:59,826 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: complete took 5ms
2015-02-21 17:54:59,826 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop()
2015-02-21 17:54:59,826 DEBUG [main] org.apache.hadoop.service.CompositeService: Stopping service #5: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter in state org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter: INITED
2015-02-21 17:54:59,826 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter entered state STOPPED
2015-02-21 17:54:59,826 DEBUG [main] org.apache.hadoop.service.CompositeService: Stopping service #4: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter in state org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter: STOPPED
2015-02-21 17:54:59,826 DEBUG [main] org.apache.hadoop.service.CompositeService: Stopping service #3: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster$StagingDirCleaningService in state org.apache.hadoop.mapreduce.v2.app.MRAppMaster$StagingDirCleaningService: STARTED
2015-02-21 17:54:59,826 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapreduce.v2.app.MRAppMaster$StagingDirCleaningService entered state STOPPED
2015-02-21 17:54:59,826 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Skipping cleaning up the staging dir. assuming AM will be retried.
2015-02-21 17:54:59,826 DEBUG [main] org.apache.hadoop.service.CompositeService: Stopping service #2: Service org.apache.hadoop.mapred.TaskAttemptListenerImpl in state org.apache.hadoop.mapred.TaskAttemptListenerImpl: STARTED
2015-02-21 17:54:59,827 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapred.TaskAttemptListenerImpl entered state STOPPED
2015-02-21 17:54:59,827 INFO [main] org.apache.hadoop.ipc.Server: Stopping server on 50483
2015-02-21 17:54:59,827 DEBUG [IPC Server handler 0 on 50483] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50483: exiting
[2015-02-21 17:54:59,827-828: IPC Server handlers 1 through 29 on 50483 each log the same "exiting" record]
2015-02-21 17:54:59,829 INFO [IPC Server listener on 50483] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 50483
2015-02-21 17:54:59,829 DEBUG [main] org.apache.hadoop.service.CompositeService: org.apache.hadoop.mapred.TaskAttemptListenerImpl: stopping services, size=1
2015-02-21 17:54:59,830 DEBUG [main] org.apache.hadoop.service.CompositeService: Stopping service #0: Service TaskHeartbeatHandler in state TaskHeartbeatHandler: STARTED
2015-02-21 17:54:59,830 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: TaskHeartbeatHandler entered state STOPPED
2015-02-21 17:54:59,830 DEBUG [main] org.apache.hadoop.service.CompositeService: Stopping service #1: Service CommitterEventHandler in state CommitterEventHandler: STARTED
2015-02-21 17:54:59,830 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler: TaskHeartbeatHandler thread interrupted
2015-02-21 17:54:59,830 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: CommitterEventHandler entered state STOPPED
2015-02-21 17:54:59,830 DEBUG [main] org.apache.hadoop.service.CompositeService: Stopping service #0: Service Dispatcher in state Dispatcher: STARTED
2015-02-21 17:54:59,830 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: Dispatcher entered state STOPPED
2015-02-21 17:54:59,830 DEBUG [IPC Server Responder] org.apache.hadoop.ipc.Server: Checking for old call responses.
2015-02-21 17:54:59,830 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder

[Attachment: Successful RM Log.dat -- AM syslog from the successful run, saved from the YARN web UI log page]

Log Type: syslog

Log Length: 1100366

2015-02-21 19:01:15,710 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: JvmMetrics, JVM related metrics etc.
2015-02-21 19:01:15,730 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterInt org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics.jobsSubmitted with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[], about=, type=DEFAULT, always=false, sampleName=Ops)
[2015-02-21 19:01:15,743-744: identical records register MRAppMetrics.jobsCompleted, jobsFailed, jobsKilled (MutableCounterInt) and jobsPreparing, jobsRunning (MutableGaugeInt)]
[2015-02-21 19:01:15,744-747: identical records register MRAppMetrics.mapsLaunched, mapsCompleted, mapsFailed, mapsKilled (MutableCounterInt), mapsRunning, mapsWaiting (MutableGaugeInt), reducesLaunched, reducesCompleted, reducesFailed, reducesKilled (MutableCounterInt), reducesRunning, reducesWaiting (MutableGaugeInt), all with the same @Metric annotation]
2015-02-21 19:01:15,749 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMetrics, MR App Metrics
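Those MutableMetricsFactory records are the metrics system reflecting over @Metric-annotated fields when a source registers. A minimal sketch of how such a source is declared, assuming the hadoop-common metrics2 API; the class and field names below are made up for illustration:

    import org.apache.hadoop.metrics2.annotation.Metric;
    import org.apache.hadoop.metrics2.annotation.Metrics;
    import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
    import org.apache.hadoop.metrics2.lib.MutableCounterInt;
    import org.apache.hadoop.metrics2.lib.MutableGaugeInt;

    @Metrics(about = "Example app metrics", context = "mapred")
    public class ExampleMetrics {
        @Metric MutableCounterInt jobsSubmitted; // counter, like MRAppMetrics.jobsSubmitted
        @Metric MutableGaugeInt jobsRunning;     // gauge, can go up and down

        public static ExampleMetrics create() {
            // register() scans the annotated fields -- producing DEBUG
            // records like the ones above -- and returns the source.
            return DefaultMetricsSystem.instance()
                    .register("ExampleMetrics", "Example app metrics", new ExampleMetrics());
        }
    }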
2015-02-21 19:01:15,765 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1424550134651_0002_000001
2015-02-21 19:01:15,800 DEBUG [main] org.apache.hadoop.util.Shell: Failed to detect a valid hadoop home directory
java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
	at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:302)
	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:327)
	at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
	at org.apache.hadoop.yarn.conf.YarnConfiguration.<clinit>(YarnConfiguration.java:552)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1396)
2015-02-21 19:01:15,809 DEBUG [main] org.apache.hadoop.util.Shell: setsid exited with exit code 0
2015-02-21 19:01:16,160 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; Ignoring.
2015-02-21 19:01:16,163 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2015-02-21 19:01:16,163 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf; Ignoring.
2015-02-21 19:01:16,165 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class; Ignoring.
2015-02-21 19:01:16,168 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf; Ignoring.
2015-02-21 19:01:16,198 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2015-02-21 19:01:16,216 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 19:01:16,217 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 19:01:16,218 DEBUG [main] org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[GetGroups], about=, type=DEFAULT, always=false, sampleName=Ops)
2015-02-21 19:01:16,218 DEBUG [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
2015-02-21 19:01:16,312 DEBUG [main] org.apache.hadoop.security.Groups: Creating new Groups object
2015-02-21 19:01:16,314 DEBUG [main] org.apache.hadoop.security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000; warningDeltaMs=5000
2015-02-21 19:01:16,318 DEBUG [main] org.apache.hadoop.security.UserGroupInformation: hadoop login
2015-02-21 19:01:16,318 DEBUG [main] org.apache.hadoop.security.UserGroupInformation: hadoop login commit
2015-02-21 19:01:16,322 DEBUG [main] org.apache.hadoop.security.UserGroupInformation: using local user:UnixPrincipal: yarn
2015-02-21 19:01:16,329 DEBUG [main] org.apache.hadoop.security.UserGroupInformation: UGI loginUser:yarn (auth:SIMPLE)
2015-02-21 19:01:16,330 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2015-02-21 19:01:16,330 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@5d3c9c43)
2015-02-21 19:01:16,366 DEBUG [main] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1474)
2015-02-21 19:01:16,367 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapreduce.v2.app.MRAppMaster entered state INITED
2015-02-21 19:01:16,382 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
2015-02-21 19:01:16,543 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; Ignoring.
2015-02-21 19:01:16,545 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2015-02-21 19:01:16,546 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf; Ignoring.
2015-02-21 19:01:16,547 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class; Ignoring.
2015-02-21 19:01:16,549 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf; Ignoring.
2015-02-21 19:01:16,560 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
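One detail worth noticing above: the AMRMToken is logged with an empty Service field. Token selectors pick a token out of the UGI's credentials by matching its service against the address the RPC client is about to dial, so the framework has to stamp the scheduler address into the token before the AM connects. A minimal sketch of that mechanism, assuming SecurityUtil from hadoop-common; the method, hostname, and port here are illustrative only, not the author's code:

    import java.net.InetSocketAddress;

    import org.apache.hadoop.security.SecurityUtil;
    import org.apache.hadoop.security.token.Token;

    public class TokenServiceSketch {
        // Fills in the token's Service field as "ip:port" so a selector
        // looking for the scheduler address can find this token.
        static void bindToScheduler(Token<?> amrmToken) {
            InetSocketAddress scheduler =
                    new InetSocketAddress("hadoop0.rdpratti.com", 8030); // illustrative
            SecurityUtil.setTokenService(amrmToken, scheduler);
        }
    }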
2015-02-21 19:01:16,603 DEBUG [main] org.apache.hadoop.hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
2015-02-21 19:01:16,603 DEBUG [main] org.apache.hadoop.hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
2015-02-21 19:01:16,603 DEBUG [main] org.apache.hadoop.hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
2015-02-21 19:01:16,603 DEBUG [main] org.apache.hadoop.hdfs.BlockReaderLocal: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
2015-02-21 19:01:16,623 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: No KeyProvider found.
2015-02-21 19:01:16,655 DEBUG [main] org.apache.hadoop.io.retry.RetryUtils: multipleLinearRandomRetry = null
2015-02-21 19:01:16,684 DEBUG [main] org.apache.hadoop.ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@3996a317
2015-02-21 19:01:16,690 DEBUG [main] org.apache.hadoop.ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@27c8bfa4
2015-02-21 19:01:16,893 DEBUG [Finalizer] org.apache.hadoop.fs.azure.NativeAzureFileSystem: finalize() called.
2015-02-21 19:01:16,894 DEBUG [Finalizer] org.apache.hadoop.fs.azure.NativeAzureFileSystem: finalize() called.
2015-02-21 19:01:17,171 DEBUG [main] org.apache.hadoop.util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
2015-02-21 19:01:17,172 DEBUG [main] org.apache.hadoop.util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
2015-02-21 19:01:17,172 DEBUG [main] org.apache.hadoop.util.NativeCodeLoader: java.library.path=/data/yarn/nm/usercache/cloudera/appcache/application_1424550134651_0002/container_1424550134651_0002_01_000001:/lib/native::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2015-02-21 19:01:17,172 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
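The UnsatisfiedLinkError above is benign here: with no libhadoop on java.library.path the client simply falls back to pure-Java implementations. A minimal runtime check, assuming org.apache.hadoop.util.NativeCodeLoader is on the classpath:

    import org.apache.hadoop.util.NativeCodeLoader;

    public class NativeCheck {
        public static void main(String[] args) {
            // false here matches the "using builtin-java classes" warning above
            System.out.println("native hadoop loaded: " + NativeCodeLoader.isNativeCodeLoaded());
        }
    }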
using builtin-java classes where applicable=0A= 2015-02-21 19:01:17,220 DEBUG [main] = org.apache.hadoop.util.PerformanceAdvisory: Both short-circuit local = reads and UNIX domain socket are disabled.=0A= 2015-02-21 19:01:17,225 DEBUG [main] = org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil: = DataTransferProtocol not using SaslPropertiesResolver, no QOP found in = configuration for dfs.data.transfer.protection=0A= 2015-02-21 19:01:17,242 DEBUG [main] org.apache.hadoop.ipc.Client: The = ping interval is 60000 ms.=0A= 2015-02-21 19:01:17,243 DEBUG [main] org.apache.hadoop.ipc.Client: = Connecting to hadoop0.rdpratti.com/192.168.2.253:8020=0A= 2015-02-21 19:01:17,255 DEBUG [main] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:7= 12)=0A= 2015-02-21 19:01:17,345 DEBUG [main] = org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: = NEGOTIATE=0A= =0A= 2015-02-21 19:01:17,355 DEBUG [main] = org.apache.hadoop.security.SaslRpcClient: Received SASL message state: = NEGOTIATE=0A= auths {=0A= method: "TOKEN"=0A= mechanism: "DIGEST-MD5"=0A= protocol: ""=0A= serverId: "default"=0A= challenge: = "realm=3D\"default\",nonce=3D\"9hMeV3S1Fw5yTAznf8uNAf6Fh4xFO8QJytWmSbol\"= ,qop=3D\"auth\",charset=3Dutf-8,algorithm=3Dmd5-sess"=0A= }=0A= auths {=0A= method: "SIMPLE"=0A= mechanism: ""=0A= }=0A= =0A= 2015-02-21 19:01:17,356 DEBUG [main] = org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface = org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB = info:@org.apache.hadoop.security.token.TokenInfo(value=3Dclass = org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector)=0A= 2015-02-21 19:01:17,357 DEBUG [main] = org.apache.hadoop.security.SaslRpcClient: Use SIMPLE authentication for = protocol ClientNamenodeProtocolPB=0A= 2015-02-21 19:01:17,357 DEBUG [main] = org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: = INITIATE=0A= auths {=0A= method: "SIMPLE"=0A= mechanism: ""=0A= }=0A= =0A= 2015-02-21 19:01:17,366 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera: starting, having = connections 1=0A= 2015-02-21 19:01:17,369 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #0=0A= 2015-02-21 19:01:17,400 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #0=0A= 2015-02-21 19:01:17,400 DEBUG [main] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 172ms=0A= 2015-02-21 19:01:17,470 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #1=0A= 2015-02-21 19:01:17,471 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #1=0A= 2015-02-21 19:01:17,472 DEBUG [main] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: 
getFileInfo took 2ms=0A= 2015-02-21 19:01:17,472 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #2=0A= 2015-02-21 19:01:17,473 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #2=0A= 2015-02-21 19:01:17,473 DEBUG [main] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms=0A= 2015-02-21 19:01:17,474 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #3=0A= 2015-02-21 19:01:17,475 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #3=0A= 2015-02-21 19:01:17,475 DEBUG [main] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms=0A= 2015-02-21 19:01:17,475 INFO [main] = org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in = config null=0A= 2015-02-21 19:01:17,561 INFO [main] = org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is = org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter=0A= 2015-02-21 19:01:17,562 DEBUG [main] = org.apache.hadoop.service.CompositeService: Adding service Dispatcher=0A= 2015-02-21 19:01:17,569 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service: = org.apache.hadoop.mapreduce.v2.app.client.MRClientService entered state = INITED=0A= 2015-02-21 19:01:17,572 DEBUG [main] = org.apache.hadoop.service.CompositeService: Adding service = CommitterEventHandler=0A= 2015-02-21 19:01:17,579 DEBUG [main] = org.apache.hadoop.service.CompositeService: Adding service = org.apache.hadoop.mapred.TaskAttemptListenerImpl=0A= 2015-02-21 19:01:17,583 INFO [main] = org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class = org.apache.hadoop.mapreduce.jobhistory.EventType for class = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler=0A= 2015-02-21 19:01:17,584 INFO [main] = org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class = org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class = org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher=0A= 2015-02-21 19:01:17,585 INFO [main] = org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class = org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class = org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher=0A= 2015-02-21 19:01:17,586 INFO [main] = org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for = class = org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher=0A= 2015-02-21 19:01:17,586 INFO [main] = org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class = org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class = org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler=0A= 2015-02-21 19:01:17,587 INFO [main] = org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class = org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for = class = org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher=0A= 
2015-02-21 19:01:17,587 DEBUG [main] = org.apache.hadoop.service.CompositeService: Adding service = org.apache.hadoop.mapreduce.v2.app.MRAppMaster$StagingDirCleaningService=0A= 2015-02-21 19:01:17,587 DEBUG [main] = org.apache.hadoop.service.CompositeService: Adding service = org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter=0A= 2015-02-21 19:01:17,588 INFO [main] = org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class = org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for = class = org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter=0A= 2015-02-21 19:01:17,588 DEBUG [main] = org.apache.hadoop.service.CompositeService: Adding service = org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter=0A= 2015-02-21 19:01:17,589 INFO [main] = org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType = for class = org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter=0A= 2015-02-21 19:01:17,589 DEBUG [main] = org.apache.hadoop.service.CompositeService: Adding service = JobHistoryEventHandler=0A= 2015-02-21 19:01:17,589 DEBUG [main] = org.apache.hadoop.service.CompositeService: = org.apache.hadoop.mapreduce.v2.app.MRAppMaster: initing services, = size=3D7=0A= 2015-02-21 19:01:17,589 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service: Dispatcher entered = state INITED=0A= 2015-02-21 19:01:17,589 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service: = CommitterEventHandler entered state INITED=0A= 2015-02-21 19:01:17,590 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service: = org.apache.hadoop.mapred.TaskAttemptListenerImpl entered state INITED=0A= 2015-02-21 19:01:17,592 DEBUG [main] = org.apache.hadoop.service.CompositeService: Adding service = TaskHeartbeatHandler=0A= 2015-02-21 19:01:17,592 DEBUG [main] = org.apache.hadoop.service.CompositeService: = org.apache.hadoop.mapred.TaskAttemptListenerImpl: initing services, = size=3D1=0A= 2015-02-21 19:01:17,592 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service: TaskHeartbeatHandler = entered state INITED=0A= 2015-02-21 19:01:17,592 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service: = org.apache.hadoop.mapreduce.v2.app.MRAppMaster$StagingDirCleaningService = entered state INITED=0A= 2015-02-21 19:01:17,592 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service: = org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter = entered state INITED=0A= 2015-02-21 19:01:17,592 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service: = org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter = entered state INITED=0A= 2015-02-21 19:01:17,592 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service: = JobHistoryEventHandler entered state INITED=0A= 2015-02-21 19:01:17,596 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #4=0A= 2015-02-21 19:01:17,597 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #4=0A= 2015-02-21 19:01:17,597 DEBUG [main] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 2ms=0A= 2015-02-21 19:01:17,598 DEBUG [IPC 
Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #5=0A= 2015-02-21 19:01:17,599 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #5=0A= 2015-02-21 19:01:17,599 DEBUG [main] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms=0A= 2015-02-21 19:01:17,600 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #6=0A= 2015-02-21 19:01:17,601 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #6=0A= 2015-02-21 19:01:17,601 DEBUG [main] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms=0A= 2015-02-21 19:01:17,645 INFO [main] = org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class = org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for = class = org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler=0A= 2015-02-21 19:01:17,858 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: from system property: = null=0A= 2015-02-21 19:01:17,858 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: from environment = variable: null=0A= 2015-02-21 19:01:17,879 DEBUG [main] = org.apache.commons.configuration.ConfigurationUtils: = ConfigurationUtils.locate(): base is null, name is = hadoop-metrics2-mrappmaster.properties=0A= 2015-02-21 19:01:17,882 DEBUG [main] = org.apache.commons.configuration.ConfigurationUtils: = ConfigurationUtils.locate(): base is null, name is = hadoop-metrics2.properties=0A= 2015-02-21 19:01:17,882 DEBUG [main] = org.apache.commons.configuration.ConfigurationUtils: Loading = configuration from the context classpath (hadoop-metrics2.properties)=0A= 2015-02-21 19:01:17,886 INFO [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from = hadoop-metrics2.properties=0A= 2015-02-21 19:01:17,887 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: =0A= 2015-02-21 19:01:17,887 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: =0A= 2015-02-21 19:01:17,890 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'PropertiesConfiguration' for key: period=0A= 2015-02-21 19:01:17,893 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableStat = org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotStat with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Snapshot, Snapshot stats], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:17,894 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableStat = org.apache.hadoop.metrics2.impl.MetricsSystemImpl.publishStat with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Publish, Publishing stats], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:17,894 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = 
org.apache.hadoop.metrics2.lib.MutableCounterLong = org.apache.hadoop.metrics2.impl.MetricsSystemImpl.droppedPubAll with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Dropped updates by all sinks], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:17,898 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'PropertiesConfiguration' for key: source.source.start_mbeans=0A= 2015-02-21 19:01:17,898 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'MetricsConfig' for key: source.start_mbeans=0A= 2015-02-21 19:01:17,898 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'PropertiesConfiguration' for key: *.source.start_mbeans=0A= 2015-02-21 19:01:17,947 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating attr = cache...=0A= 2015-02-21 19:01:17,948 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done. # tags & = metrics=3D10=0A= 2015-02-21 19:01:17,948 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating info = cache...=0A= 2015-02-21 19:01:17,948 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: = [javax.management.MBeanAttributeInfo[description=3DMetrics context, = name=3Dtag.Context, type=3Djava.lang.String, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of active = metrics sources, name=3DNumActiveSources, type=3Djava.lang.Integer, = read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of all = registered metrics sources, name=3DNumAllSources, = type=3Djava.lang.Integer, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of active = metrics sinks, name=3DNumActiveSinks, type=3Djava.lang.Integer, = read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of all = registered metrics sinks, name=3DNumAllSinks, type=3Djava.lang.Integer, = read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of ops for = snapshot stats, name=3DSnapshotNumOps, type=3Djava.lang.Long, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DAverage time for = snapshot stats, name=3DSnapshotAvgTime, type=3Djava.lang.Double, = read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of ops for = publishing stats, name=3DPublishNumOps, type=3Djava.lang.Long, = read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DAverage time for = publishing stats, name=3DPublishAvgTime, type=3Djava.lang.Double, = read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DDropped updates by all = sinks, name=3DDroppedPubAll, type=3Djava.lang.Long, read-only, = descriptor=3D{}]]=0A= 2015-02-21 19:01:17,948 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done=0A= 2015-02-21 19:01:17,948 DEBUG [main] = org.apache.hadoop.metrics2.util.MBeans: Registered = Hadoop:service=3DMRAppMaster,name=3DMetricsSystem,sub=3DStats=0A= 2015-02-21 19:01:17,948 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source = MetricsSystem,sub=3DStats registered.=0A= 2015-02-21 19:01:17,949 INFO [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot = period at 10 second(s).=0A= 2015-02-21 19:01:17,949 INFO [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
MRAppMaster metrics = system started=0A= 2015-02-21 19:01:17,949 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'PropertiesConfiguration' for key: source.source.start_mbeans=0A= 2015-02-21 19:01:17,949 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'MetricsConfig' for key: source.start_mbeans=0A= 2015-02-21 19:01:17,949 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'PropertiesConfiguration' for key: *.source.start_mbeans=0A= 2015-02-21 19:01:17,951 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating attr = cache...=0A= 2015-02-21 19:01:17,951 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done. # tags & = metrics=3D27=0A= 2015-02-21 19:01:17,951 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating info = cache...=0A= 2015-02-21 19:01:17,952 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: = [javax.management.MBeanAttributeInfo[description=3DMetrics context, = name=3Dtag.Context, type=3Djava.lang.String, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DProcess name, = name=3Dtag.ProcessName, type=3Djava.lang.String, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DSession ID, = name=3Dtag.SessionId, type=3Djava.lang.String, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DLocal hostname, = name=3Dtag.Hostname, type=3Djava.lang.String, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNon-heap memory used = in MB, name=3DMemNonHeapUsedM, type=3Djava.lang.Float, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNon-heap memory = committed in MB, name=3DMemNonHeapCommittedM, type=3Djava.lang.Float, = read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNon-heap memory max in = MB, name=3DMemNonHeapMaxM, type=3Djava.lang.Float, read-only, = descriptor=3D{}], javax.management.MBeanAttributeInfo[description=3DHeap = memory used in MB, name=3DMemHeapUsedM, type=3Djava.lang.Float, = read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DHeap memory committed = in MB, name=3DMemHeapCommittedM, type=3Djava.lang.Float, read-only, = descriptor=3D{}], javax.management.MBeanAttributeInfo[description=3DHeap = memory max in MB, name=3DMemHeapMaxM, type=3Djava.lang.Float, read-only, = descriptor=3D{}], javax.management.MBeanAttributeInfo[description=3DMax = memory size in MB, name=3DMemMaxM, type=3Djava.lang.Float, read-only, = descriptor=3D{}], javax.management.MBeanAttributeInfo[description=3DGC = Count for PS Scavenge, name=3DGcCountPS Scavenge, type=3Djava.lang.Long, = read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DGC Time for PS = Scavenge, name=3DGcTimeMillisPS Scavenge, type=3Djava.lang.Long, = read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DGC Count for PS = MarkSweep, name=3DGcCountPS MarkSweep, type=3Djava.lang.Long, read-only, = descriptor=3D{}], javax.management.MBeanAttributeInfo[description=3DGC = Time for PS MarkSweep, name=3DGcTimeMillisPS MarkSweep, = type=3Djava.lang.Long, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DTotal GC count, = name=3DGcCount, type=3Djava.lang.Long, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DTotal GC time in = milliseconds, 
name=3DGcTimeMillis, type=3Djava.lang.Long, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of new threads, = name=3DThreadsNew, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of runnable = threads, name=3DThreadsRunnable, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of blocked = threads, name=3DThreadsBlocked, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of waiting = threads, name=3DThreadsWaiting, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of timed = waiting threads, name=3DThreadsTimedWaiting, type=3Djava.lang.Integer, = read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of terminated = threads, name=3DThreadsTerminated, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DTotal number of fatal = log events, name=3DLogFatal, type=3Djava.lang.Long, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DTotal number of error = log events, name=3DLogError, type=3Djava.lang.Long, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DTotal number of = warning log events, name=3DLogWarn, type=3Djava.lang.Long, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DTotal number of info = log events, name=3DLogInfo, type=3Djava.lang.Long, read-only, = descriptor=3D{}]]=0A= 2015-02-21 19:01:17,952 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done=0A= 2015-02-21 19:01:17,952 DEBUG [main] = org.apache.hadoop.metrics2.util.MBeans: Registered = Hadoop:service=3DMRAppMaster,name=3DJvmMetrics=0A= 2015-02-21 19:01:17,952 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source = JvmMetrics registered.=0A= 2015-02-21 19:01:17,952 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Registered source = JvmMetrics=0A= 2015-02-21 19:01:17,952 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'PropertiesConfiguration' for key: source.source.start_mbeans=0A= 2015-02-21 19:01:17,952 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'MetricsConfig' for key: source.start_mbeans=0A= 2015-02-21 19:01:17,952 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'PropertiesConfiguration' for key: *.source.start_mbeans=0A= 2015-02-21 19:01:17,953 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating attr = cache...=0A= 2015-02-21 19:01:17,953 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done. 
# tags & = metrics=3D20=0A= 2015-02-21 19:01:17,953 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating info = cache...=0A= 2015-02-21 19:01:17,953 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: = [javax.management.MBeanAttributeInfo[description=3DMetrics context, = name=3Dtag.Context, type=3Djava.lang.String, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DLocal hostname, = name=3Dtag.Hostname, type=3Djava.lang.String, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DJobsSubmitted, = name=3DJobsSubmitted, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DJobsCompleted, = name=3DJobsCompleted, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DJobsFailed, = name=3DJobsFailed, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DJobsKilled, = name=3DJobsKilled, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DJobsPreparing, = name=3DJobsPreparing, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DJobsRunning, = name=3DJobsRunning, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DMapsLaunched, = name=3DMapsLaunched, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DMapsCompleted, = name=3DMapsCompleted, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DMapsFailed, = name=3DMapsFailed, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DMapsKilled, = name=3DMapsKilled, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DMapsRunning, = name=3DMapsRunning, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DMapsWaiting, = name=3DMapsWaiting, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DReducesLaunched, = name=3DReducesLaunched, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DReducesCompleted, = name=3DReducesCompleted, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DReducesFailed, = name=3DReducesFailed, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DReducesKilled, = name=3DReducesKilled, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DReducesRunning, = name=3DReducesRunning, type=3Djava.lang.Integer, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DReducesWaiting, = name=3DReducesWaiting, type=3Djava.lang.Integer, read-only, = descriptor=3D{}]]=0A= 2015-02-21 19:01:17,954 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done=0A= 2015-02-21 19:01:17,954 DEBUG [main] = org.apache.hadoop.metrics2.util.MBeans: Registered = Hadoop:service=3DMRAppMaster,name=3DMRAppMetrics=0A= 2015-02-21 19:01:17,954 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source = MRAppMetrics 
registered.=0A= 2015-02-21 19:01:17,954 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Registered source = MRAppMetrics=0A= 2015-02-21 19:01:17,954 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'PropertiesConfiguration' for key: source.source.start_mbeans=0A= 2015-02-21 19:01:17,954 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'MetricsConfig' for key: source.start_mbeans=0A= 2015-02-21 19:01:17,954 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'PropertiesConfiguration' for key: *.source.start_mbeans=0A= 2015-02-21 19:01:17,954 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating attr = cache...=0A= 2015-02-21 19:01:17,954 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done. # tags & = metrics=3D8=0A= 2015-02-21 19:01:17,954 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating info = cache...=0A= 2015-02-21 19:01:17,954 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: = [javax.management.MBeanAttributeInfo[description=3DMetrics context, = name=3Dtag.Context, type=3Djava.lang.String, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DLocal hostname, = name=3Dtag.Hostname, type=3Djava.lang.String, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of ops for rate = of successful kerberos logins and latency (milliseconds), = name=3DLoginSuccessNumOps, type=3Djava.lang.Long, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DAverage time for rate = of successful kerberos logins and latency (milliseconds), = name=3DLoginSuccessAvgTime, type=3Djava.lang.Double, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of ops for rate = of failed kerberos logins and latency (milliseconds), = name=3DLoginFailureNumOps, type=3Djava.lang.Long, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DAverage time for rate = of failed kerberos logins and latency (milliseconds), = name=3DLoginFailureAvgTime, type=3Djava.lang.Double, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of ops for = getGroups, name=3DGetGroupsNumOps, type=3Djava.lang.Long, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DAverage time for = getGroups, name=3DGetGroupsAvgTime, type=3Djava.lang.Double, read-only, = descriptor=3D{}]]=0A= 2015-02-21 19:01:17,954 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done=0A= 2015-02-21 19:01:17,955 DEBUG [main] = org.apache.hadoop.metrics2.util.MBeans: Registered = Hadoop:service=3DMRAppMaster,name=3DUgiMetrics=0A= 2015-02-21 19:01:17,955 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source = UgiMetrics registered.=0A= 2015-02-21 19:01:17,955 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Registered source = UgiMetrics=0A= 2015-02-21 19:01:17,956 DEBUG [main] = org.apache.hadoop.metrics2.util.MBeans: Registered = Hadoop:service=3DMRAppMaster,name=3DMetricsSystem,sub=3DControl=0A= 2015-02-21 19:01:17,957 DEBUG [main] = org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing = job_1424550134651_0002 of type JOB_INIT=0A= 2015-02-21 19:01:17,958 DEBUG [main] = org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: startJobs: = parent=3D/user/cloudera/.staging 
child=3Djob_1424550134651_0002=0A= 2015-02-21 19:01:17,961 INFO [main] = org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token = for job_1424550134651_0002 to jobTokenSecretManager=0A= 2015-02-21 19:01:17,974 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #7=0A= 2015-02-21 19:01:17,975 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #7=0A= 2015-02-21 19:01:17,975 DEBUG [main] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms=0A= 2015-02-21 19:01:17,984 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #8=0A= 2015-02-21 19:01:17,986 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #8=0A= 2015-02-21 19:01:17,986 DEBUG [main] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getBlockLocations took 2ms=0A= 2015-02-21 19:01:18,024 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: = newInfo =3D LocatedBlocks{=0A= fileLength=3D151=0A= underConstruction=3Dfalse=0A= = blocks=3D[LocatedBlock{BP-268700609-192.168.2.253-1419532004456:blk_10737= 54561_13737; getBlockSize()=3D151; corrupt=3Dfalse; offset=3D0; = locs=3D[192.168.2.253:50010, 192.168.2.251:50010, 192.168.2.252:50010]}]=0A= = lastLocatedBlock=3DLocatedBlock{BP-268700609-192.168.2.253-1419532004456:= blk_1073754561_13737; getBlockSize()=3D151; corrupt=3Dfalse; offset=3D0; = locs=3D[192.168.2.253:50010, 192.168.2.251:50010, 192.168.2.252:50010]}=0A= isLastBlockComplete=3Dtrue}=0A= 2015-02-21 19:01:18,028 DEBUG [main] org.apache.hadoop.hdfs.DFSClient: = Connecting to datanode 192.168.2.253:50010=0A= 2015-02-21 19:01:18,037 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #9=0A= 2015-02-21 19:01:18,037 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #9=0A= 2015-02-21 19:01:18,037 DEBUG [main] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getServerDefaults took 1ms=0A= 2015-02-21 19:01:18,044 DEBUG [main] = org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient:= SASL client skipping handshake in unsecured configuration for addr =3D = /192.168.2.253, datanodeId =3D 192.168.2.253:50010=0A= 2015-02-21 19:01:18,096 INFO [main] = org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing = job_1424550134651_0002 because: not enabled; too many reduces;=0A= 2015-02-21 19:01:18,115 INFO [main] = org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job = job_1424550134651_0002 =3D 5343207. 
Number of splits =3D 5=0A= 2015-02-21 19:01:18,116 INFO [main] = org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces = for job job_1424550134651_0002 =3D 4=0A= 2015-02-21 19:01:18,116 INFO [main] = org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: = job_1424550134651_0002Job Transitioned from NEW to INITED=0A= 2015-02-21 19:01:18,118 INFO [main] = org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching = normal, non-uberized, multi-container job job_1424550134651_0002.=0A= 2015-02-21 19:01:18,118 DEBUG [main] org.apache.hadoop.yarn.ipc.YarnRPC: = Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC=0A= 2015-02-21 19:01:18,119 DEBUG [main] = org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a = HadoopYarnProtoRpc server for protocol interface = org.apache.hadoop.mapreduce.v2.api.MRClientProtocol with 1 handlers=0A= 2015-02-21 19:01:18,151 INFO [main] = org.apache.hadoop.ipc.CallQueueManager: Using callQueue class = java.util.concurrent.LinkedBlockingQueue=0A= 2015-02-21 19:01:18,151 DEBUG [main] org.apache.hadoop.ipc.Server: TOKEN = authentication enabled for secret manager=0A= 2015-02-21 19:01:18,151 DEBUG [main] org.apache.hadoop.ipc.Server: = Server accepts auth methods:[TOKEN, SIMPLE]=0A= 2015-02-21 19:01:18,162 INFO [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 59910=0A= 2015-02-21 19:01:18,164 DEBUG [main] = org.apache.hadoop.ipc.metrics.RpcMetrics: Initialized = MetricsRegistry{info=3DMetricsInfoImpl{name=3Drpc, description=3Drpc}, = tags=3D[MetricsTag{info=3DMetricsInfoImpl{name=3Dport, description=3DRPC = port}, value=3D59910}], metrics=3D[]}=0A= 2015-02-21 19:01:18,165 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterLong = org.apache.hadoop.ipc.metrics.RpcMetrics.receivedBytes with annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Number of received bytes], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:18,165 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterLong = org.apache.hadoop.ipc.metrics.RpcMetrics.sentBytes with annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Number of sent bytes], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:18,165 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableRate = org.apache.hadoop.ipc.metrics.RpcMetrics.rpcQueueTime with annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Queue time], about=3D, type=3DDEFAULT, always=3Dfalse, = sampleName=3DOps)=0A= 2015-02-21 19:01:18,166 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableRate = org.apache.hadoop.ipc.metrics.RpcMetrics.rpcProcessingTime with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Processsing time], about=3D, type=3DDEFAULT, always=3Dfalse, = sampleName=3DOps)=0A= 2015-02-21 19:01:18,166 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterInt = org.apache.hadoop.ipc.metrics.RpcMetrics.rpcAuthenticationFailures with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Number of 
authentication failures], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:18,166 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterInt = org.apache.hadoop.ipc.metrics.RpcMetrics.rpcAuthenticationSuccesses with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Number of authentication successes], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:18,166 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterInt = org.apache.hadoop.ipc.metrics.RpcMetrics.rpcAuthorizationFailures with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Number of authorization failures], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:18,167 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterInt = org.apache.hadoop.ipc.metrics.RpcMetrics.rpcAuthorizationSuccesses with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Number of authorization sucesses], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:18,167 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: method public int = org.apache.hadoop.ipc.metrics.RpcMetrics.numOpenConnections() with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Number of open connections], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:18,169 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: method public int = org.apache.hadoop.ipc.metrics.RpcMetrics.callQueueLength() with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Length of the call queue], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:18,169 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: = RpcActivityForPort59910, Aggregate RPC metrics=0A= 2015-02-21 19:01:18,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'PropertiesConfiguration' for key: source.source.start_mbeans=0A= 2015-02-21 19:01:18,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'MetricsConfig' for key: source.start_mbeans=0A= 2015-02-21 19:01:18,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'PropertiesConfiguration' for key: *.source.start_mbeans=0A= 2015-02-21 19:01:18,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating attr = cache...=0A= 2015-02-21 19:01:18,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done. 
# tags & = metrics=3D15=0A= 2015-02-21 19:01:18,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating info = cache...=0A= 2015-02-21 19:01:18,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: = [javax.management.MBeanAttributeInfo[description=3DRPC port, = name=3Dtag.port, type=3Djava.lang.String, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DMetrics context, = name=3Dtag.Context, type=3Djava.lang.String, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DLocal hostname, = name=3Dtag.Hostname, type=3Djava.lang.String, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of received = bytes, name=3DReceivedBytes, type=3Djava.lang.Long, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of sent bytes, = name=3DSentBytes, type=3Djava.lang.Long, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of ops for = queue time, name=3DRpcQueueTimeNumOps, type=3Djava.lang.Long, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DAverage time for queue = time, name=3DRpcQueueTimeAvgTime, type=3Djava.lang.Double, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of ops for = processsing time, name=3DRpcProcessingTimeNumOps, type=3Djava.lang.Long, = read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DAverage time for = processsing time, name=3DRpcProcessingTimeAvgTime, = type=3Djava.lang.Double, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of = authentication failures, name=3DRpcAuthenticationFailures, = type=3Djava.lang.Integer, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of = authentication successes, name=3DRpcAuthenticationSuccesses, = type=3Djava.lang.Integer, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of = authorization failures, name=3DRpcAuthorizationFailures, = type=3Djava.lang.Integer, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of = authorization sucesses, name=3DRpcAuthorizationSuccesses, = type=3Djava.lang.Integer, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of open = connections, name=3DNumOpenConnections, type=3Djava.lang.Integer, = read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DLength of the call = queue, name=3DCallQueueLength, type=3Djava.lang.Integer, read-only, = descriptor=3D{}]]=0A= 2015-02-21 19:01:18,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done=0A= 2015-02-21 19:01:18,170 DEBUG [main] = org.apache.hadoop.metrics2.util.MBeans: Registered = Hadoop:service=3DMRAppMaster,name=3DRpcActivityForPort59910=0A= 2015-02-21 19:01:18,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source = RpcActivityForPort59910 registered.=0A= 2015-02-21 19:01:18,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Registered source = RpcActivityForPort59910=0A= 2015-02-21 19:01:18,172 DEBUG [main] = org.apache.hadoop.ipc.metrics.RpcDetailedMetrics: = MetricsInfoImpl{name=3Drpcdetailed, description=3Drpcdetailed}=0A= 2015-02-21 19:01:18,173 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = 
org.apache.hadoop.metrics2.lib.MutableRates = org.apache.hadoop.ipc.metrics.RpcDetailedMetrics.rates with annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[], about=3D, type=3DDEFAULT, always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:18,177 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: = RpcDetailedActivityForPort59910, Per method RPC metrics=0A= 2015-02-21 19:01:18,177 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'PropertiesConfiguration' for key: source.source.start_mbeans=0A= 2015-02-21 19:01:18,177 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'MetricsConfig' for key: source.start_mbeans=0A= 2015-02-21 19:01:18,177 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'PropertiesConfiguration' for key: *.source.start_mbeans=0A= 2015-02-21 19:01:18,177 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating attr = cache...=0A= 2015-02-21 19:01:18,177 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done. # tags & = metrics=3D3=0A= 2015-02-21 19:01:18,177 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating info = cache...=0A= 2015-02-21 19:01:18,177 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: = [javax.management.MBeanAttributeInfo[description=3DRPC port, = name=3Dtag.port, type=3Djava.lang.String, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DMetrics context, = name=3Dtag.Context, type=3Djava.lang.String, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DLocal hostname, = name=3Dtag.Hostname, type=3Djava.lang.String, read-only, = descriptor=3D{}]]=0A= 2015-02-21 19:01:18,177 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done=0A= 2015-02-21 19:01:18,177 DEBUG [main] = org.apache.hadoop.metrics2.util.MBeans: Registered = Hadoop:service=3DMRAppMaster,name=3DRpcDetailedActivityForPort59910=0A= 2015-02-21 19:01:18,177 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source = RpcDetailedActivityForPort59910 registered.=0A= 2015-02-21 19:01:18,177 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Registered source = RpcDetailedActivityForPort59910=0A= 2015-02-21 19:01:18,185 DEBUG [main] org.apache.hadoop.ipc.Server: = RpcKind =3D RPC_PROTOCOL_BUFFER Protocol Name =3D = org.apache.hadoop.ipc.ProtocolMetaInfoPB version=3D1 = ProtocolImpl=3Dorg.apache.hadoop.ipc.protobuf.ProtocolInfoProtos$Protocol= InfoService$2 protocolClass=3Dorg.apache.hadoop.ipc.ProtocolMetaInfoPB=0A= 2015-02-21 19:01:18,185 DEBUG [main] org.apache.hadoop.ipc.Server: = RpcKind =3D RPC_PROTOCOL_BUFFER Protocol Name =3D = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB version=3D1 = ProtocolImpl=3Dorg.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProt= ocolService$2 = protocolClass=3Dorg.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB=0A= 2015-02-21 19:01:18,186 INFO [main] = org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding = protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the = server=0A= 2015-02-21 19:01:18,186 DEBUG [main] org.apache.hadoop.ipc.Server: = RpcKind =3D RPC_PROTOCOL_BUFFER Protocol Name =3D = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB version=3D1 = ProtocolImpl=3Dorg.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProt= ocolService$2 = 
protocolClass=3Dorg.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB=0A= 2015-02-21 19:01:18,186 INFO [IPC Server Responder] = org.apache.hadoop.ipc.Server: IPC Server Responder: starting=0A= 2015-02-21 19:01:18,187 INFO [main] = org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated = MRClientService at hadoop0.rdpratti.com/192.168.2.253:59910=0A= 2015-02-21 19:01:18,188 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: starting=0A= 2015-02-21 19:01:18,195 INFO [IPC Server listener on 59910] = org.apache.hadoop.ipc.Server: IPC Server listener on 59910: starting=0A= 2015-02-21 19:01:18,277 INFO [main] org.mortbay.log: Logging to = org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via = org.mortbay.log.Slf4jLog=0A= 2015-02-21 19:01:18,277 DEBUG [main] org.mortbay.log: = filterNameMap=3D{NoCacheFilter=3DNoCacheFilter}=0A= 2015-02-21 19:01:18,277 DEBUG [main] org.mortbay.log: = pathFilters=3D[(F=3DNoCacheFilter,[/*],[],15)]=0A= 2015-02-21 19:01:18,278 DEBUG [main] org.mortbay.log: = servletFilterMap=3Dnull=0A= 2015-02-21 19:01:18,278 DEBUG [main] org.mortbay.log: = servletPathMap=3Dnull=0A= 2015-02-21 19:01:18,278 DEBUG [main] org.mortbay.log: = servletNameMap=3Dnull=0A= 2015-02-21 19:01:18,280 DEBUG [main] org.mortbay.log: Container = Server@458ba94d + org.mortbay.thread.QueuedThreadPool@29bd3793 as = threadpool=0A= 2015-02-21 19:01:18,283 INFO [main] = org.apache.hadoop.http.HttpRequestLog: Http request log for = http.requests.mapreduce is not defined=0A= 2015-02-21 19:01:18,283 DEBUG [main] org.mortbay.log: Container = Server@458ba94d + ContextHandlerCollection@3d85a0b9 as handler=0A= 2015-02-21 19:01:18,284 DEBUG [main] org.mortbay.log: Container = ContextHandlerCollection@3d85a0b9 + = org.mortbay.jetty.webapp.WebAppContext@ffaf13d{/,jar:file:/opt/cloudera/p= arcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.= jar!/webapps/mapreduce} as handler=0A= 2015-02-21 19:01:18,284 DEBUG [main] org.mortbay.log: Container = ServletHandler@23f3e3fd + NoCacheFilter as filter=0A= 2015-02-21 19:01:18,284 DEBUG [main] org.mortbay.log: Container = ServletHandler@23f3e3fd + (F=3DNoCacheFilter,[/*],[],15) as filterMapping=0A= 2015-02-21 19:01:18,284 DEBUG [main] org.mortbay.log: Container = SecurityHandler@60fd097b + ServletHandler@23f3e3fd as handler=0A= 2015-02-21 19:01:18,284 DEBUG [main] org.mortbay.log: Container = SessionHandler@4799bfc + SecurityHandler@60fd097b as handler=0A= 2015-02-21 19:01:18,284 DEBUG [main] org.mortbay.log: Container = SessionHandler@4799bfc + = org.mortbay.jetty.servlet.HashSessionManager@4befbfaf as sessionManager=0A= 2015-02-21 19:01:18,284 DEBUG [main] org.mortbay.log: Container = org.mortbay.jetty.webapp.WebAppContext@ffaf13d{/,jar:file:/opt/cloudera/p= arcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.= jar!/webapps/mapreduce} + SessionHandler@4799bfc as handler=0A= 2015-02-21 19:01:18,284 DEBUG [main] org.mortbay.log: Container = org.mortbay.jetty.webapp.WebAppContext@ffaf13d{/,jar:file:/opt/cloudera/p= arcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.= jar!/webapps/mapreduce} + ErrorPageErrorHandler@6911a11b as error=0A= 2015-02-21 19:01:18,284 DEBUG [main] org.mortbay.log: Container = ContextHandlerCollection@3d85a0b9 + = org.mortbay.jetty.servlet.Context@4682981{/static,null} as handler=0A= 2015-02-21 19:01:18,284 DEBUG [main] org.mortbay.log: Container = org.mortbay.jetty.servlet.Context@4682981{/static,null} + = 
ServletHandler@527cd669 as handler
2015-02-21 19:01:18,295 DEBUG [main] org.mortbay.log: Container ServletHandler@527cd669 + org.mortbay.jetty.servlet.DefaultServlet-1020722559 as servlet
2015-02-21 19:01:18,296 DEBUG [main] org.mortbay.log: Container ServletHandler@527cd669 + (S=org.mortbay.jetty.servlet.DefaultServlet-1020722559,[/*]) as servletMapping
[after each registration event below, org.mortbay.log also dumps the handler's current filterNameMap / pathFilters / servletFilterMap / servletPathMap / servletNameMap state; those repeated DEBUG dumps are elided from this excerpt]
2015-02-21 19:01:18,297 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + safety as filter
2015-02-21 19:01:18,297 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + (F=safety,[/*],[],15) as filterMapping
2015-02-21 19:01:18,298 DEBUG [main] org.mortbay.log: Container ServletHandler@527cd669 + safety as filter
2015-02-21 19:01:18,298 DEBUG [main] org.mortbay.log: Container ServletHandler@527cd669 + (F=safety,[/*],[],15) as filterMapping
2015-02-21 19:01:18,298 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-02-21 19:01:18,305 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + AM_PROXY_FILTER as filter
2015-02-21 19:01:18,305 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15) as filterMapping
2015-02-21 19:01:18,305 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce
2015-02-21 19:01:18,306 DEBUG [main] org.mortbay.log: Container ServletHandler@527cd669 + AM_PROXY_FILTER as filter
2015-02-21 19:01:18,306 DEBUG [main] org.mortbay.log: Container ServletHandler@527cd669 + (F=AM_PROXY_FILTER,[/*],[],15) as filterMapping
2015-02-21 19:01:18,306 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static
2015-02-21 19:01:18,307 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + stacks as servlet
2015-02-21 19:01:18,307 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + (S=stacks,[/stacks]) as servletMapping
2015-02-21 19:01:18,307 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + (F=AM_PROXY_FILTER,[/stacks],[],15) as filterMapping
2015-02-21 19:01:18,308 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + logLevel as servlet
2015-02-21 19:01:18,308 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + (S=logLevel,[/logLevel]) as servletMapping
2015-02-21 19:01:18,308 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + (F=AM_PROXY_FILTER,[/logLevel],[],15) as filterMapping
2015-02-21 19:01:18,309 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + metrics as servlet
2015-02-21 19:01:18,309 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + (S=metrics,[/metrics]) as servletMapping
2015-02-21 19:01:18,310 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + (F=AM_PROXY_FILTER,[/metrics],[],15) as filterMapping
2015-02-21 19:01:18,311 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + jmx as servlet
2015-02-21 19:01:18,311 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + (S=jmx,[/jmx]) as servletMapping
2015-02-21 19:01:18,311 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + (F=AM_PROXY_FILTER,[/jmx],[],15) as filterMapping
2015-02-21 19:01:18,312 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + conf as servlet
2015-02-21 19:01:18,312 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + (S=conf,[/conf]) as servletMapping
2015-02-21 19:01:18,313 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + (F=AM_PROXY_FILTER,[/conf],[],15) as filterMapping
2015-02-21 19:01:18,313 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /mapreduce/*
2015-02-21 19:01:18,313 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + (F=AM_PROXY_FILTER,[/mapreduce/*],[],15) as filterMapping
2015-02-21 19:01:18,314 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2015-02-21 19:01:18,314 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + (F=AM_PROXY_FILTER,[/ws/*],[],15) as filterMapping
2015-02-21 19:01:18,325 DEBUG [main] org.mortbay.log: Container Server@458ba94d + HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:0 as connector
2015-02-21 19:01:18,326 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + guice as filter
2015-02-21 19:01:18,326 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + (F=guice,[/*],[],15) as filterMapping
2015-02-21 19:01:18,327 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 51221
2015-02-21 19:01:18,327 INFO [main] org.mortbay.log: jetty-6.1.26.cloudera.4
2015-02-21 19:01:18,339 DEBUG [main] org.mortbay.log: started org.mortbay.thread.QueuedThreadPool@29bd3793
2015-02-21 19:01:18,357 DEBUG [main] org.mortbay.log: Thread Context class loader is: ContextLoader@mapreduce([]) / sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:18,357 DEBUG [main] org.mortbay.log: Parent class loader is: sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:18,357 DEBUG [main] org.mortbay.log: Parent class loader is: sun.misc.Launcher$ExtClassLoader@21a722ef
2015-02-21 19:01:18,358 DEBUG [main] org.mortbay.log: Try webapp=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/mapreduce, exists=true, directory=true
2015-02-21 19:01:18,359 DEBUG [main] org.mortbay.log: Created temp dir /tmp/Jetty_0_0_0_0_51221_mapreduce____6xjp50 for org.mortbay.jetty.webapp.WebAppContext@ffaf13d{/,jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/mapreduce}
2015-02-21 19:01:18,360 INFO [main] org.mortbay.log: Extract jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/mapreduce to /tmp/Jetty_0_0_0_0_51221_mapreduce____6xjp50/webapp
2015-02-21 19:01:18,360 DEBUG [main] org.mortbay.log: Extracting entry = webapps/mapreduce from jar file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar
[19:01:18,360 through 19:01:18,426 - several hundred DEBUG "Skipping entry:" lines, one per jar entry outside webapps/mapreduce: the other bundled webapps (applicationhistory, cluster, jobhistory, node, proxy, static, test, yarn) and their static assets (DataTables 1.9.4, jQuery 1.8.2 / jQuery UI 1.9.1 themes, jstree, yarn.css), the META-INF services/NOTICE/LICENSE/DEPENDENCIES files, yarn-default.xml, yarn-version-info.properties, and every org/apache/hadoop/yarn/... class in the jar, ending with org/apache/hadoop/yarn/webapp/Dispatcher.class]
2015-02-21 19:01:18,427 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/GenericExceptionHandler.class=0A= 2015-02-21 19:01:18,427 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/WebAppException.class=0A= 2015-02-21 19:01:18,427 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/webapp/util/WebAppUtils.class=0A= 2015-02-21 19:01:18,427 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/package-info.class=0A= 2015-02-21 19:01:18,427 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientTimelineSecurityInfo$1.class=0A= 2015-02-21 19:01:18,427 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientTimelineSecurityInfo$2.class=0A= 2015-02-21 19:01:18,427 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientTimelineSecurityInfo.class=0A= 2015-02-21 19:01:18,427 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/TimelineDelegationTokenOperation.c= lass=0A= 2015-02-21 19:01:18,427 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/TimelineAuthenticationConsts.class=0A= 2015-02-21 19:01:18,427 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientRMSecurityInfo$1.class=0A= 2015-02-21 19:01:18,428 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientRMSecurityInfo$2.class=0A= 2015-02-21 19:01:18,428 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientRMSecurityInfo.class=0A= 2015-02-21 19:01:18,428 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/BaseClientToAMTokenSecretManager.c= lass=0A= 2015-02-21 19:01:18,428 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientToAMTokenIdentifier$Renewer.= class=0A= 2015-02-21 19:01:18,428 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientToAMTokenIdentifier.class=0A= 2015-02-21 19:01:18,428 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/TimelineDelegationTokenSelector.cl= ass=0A= 2015-02-21 19:01:18,428 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/TimelineDelegationTokenIdentifier$= Renewer.class=0A= 2015-02-21 19:01:18,428 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/TimelineDelegationTokenIdentifier.= class=0A= 2015-02-21 19:01:18,428 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/RMDelegationTokenIdentifier$Renewe= r.class=0A= 2015-02-21 19:01:18,428 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/RMDelegationTokenIdentifier.class=0A= 2015-02-21 19:01:18,429 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/RMDelegationTokenSelector.class=0A= 2015-02-21 19:01:18,429 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientToAMTokenSecretManager.class=0A= 2015-02-21 19:01:18,429 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/client/ClientToAMTokenSelector.class=0A= 2015-02-21 19:01:18,429 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/package-info.class=0A= 2015-02-21 19:01:18,429 DEBUG 
[main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/admin/package-info.class=0A= 2015-02-21 19:01:18,429 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/admin/AdminSecurityInfo$1.class=0A= 2015-02-21 19:01:18,429 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/admin/AdminSecurityInfo.class=0A= 2015-02-21 19:01:18,429 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/AMRMTokenIdentifier$Renewer.class=0A= 2015-02-21 19:01:18,429 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/AMRMTokenIdentifier.class=0A= 2015-02-21 19:01:18,429 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/ContainerTokenIdentifier$Renewer.class=0A= 2015-02-21 19:01:18,429 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/ContainerTokenIdentifier.class=0A= 2015-02-21 19:01:18,430 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/SchedulerSecurityInfo$1.class=0A= 2015-02-21 19:01:18,430 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/SchedulerSecurityInfo.class=0A= 2015-02-21 19:01:18,430 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/AMRMTokenSelector.class=0A= 2015-02-21 19:01:18,430 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/AdminACLsManager.class=0A= 2015-02-21 19:01:18,430 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/ContainerManagerSecurityInfo$1.class=0A= 2015-02-21 19:01:18,430 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/ContainerManagerSecurityInfo.class=0A= 2015-02-21 19:01:18,430 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/NMTokenIdentifier.class=0A= 2015-02-21 19:01:18,430 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/NMTokenSelector.class=0A= 2015-02-21 19:01:18,430 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/security/ContainerTokenSelector.class=0A= 2015-02-21 19:01:18,430 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/service/package-info.class=0A= 2015-02-21 19:01:18,431 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/service/ContainerManagementProtocolPBS= erviceImpl.class=0A= 2015-02-21 19:01:18,431 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/service/ApplicationMasterProtocolPBSer= viceImpl.class=0A= 2015-02-21 19:01:18,431 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/service/ApplicationHistoryProtocolPBSe= rviceImpl.class=0A= 2015-02-21 19:01:18,431 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/service/ApplicationClientProtocolPBSer= viceImpl.class=0A= 2015-02-21 19:01:18,431 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/client/package-info.class=0A= 2015-02-21 19:01:18,431 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/client/ContainerManagementProtocolPBCl= ientImpl.class=0A= 2015-02-21 19:01:18,432 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/client/ApplicationMasterProtocolPBClie= ntImpl.class=0A= 2015-02-21 19:01:18,432 DEBUG [main] org.mortbay.log: Skipping entry: = 
org/apache/hadoop/yarn/api/impl/pb/client/ApplicationHistoryProtocolPBCli= entImpl.class=0A= 2015-02-21 19:01:18,432 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/impl/pb/client/ApplicationClientProtocolPBClie= ntImpl.class=0A= 2015-02-21 19:01:18,432 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/package-info.class=0A= 2015-02-21 19:01:18,432 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMas= terRequestPBImpl.class=0A= 2015-02-21 19:01:18,432 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationAttemptR= eportRequestPBImpl.class=0A= 2015-02-21 19:01:18,433 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetNewApplicationReque= stPBImpl.class=0A= 2015-02-21 19:01:18,433 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetContainersRequestPB= Impl.class=0A= 2015-02-21 19:01:18,433 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/SubmitApplicationReque= stPBImpl.class=0A= 2015-02-21 19:01:18,433 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationAttempts= RequestPBImpl.class=0A= 2015-02-21 19:01:18,433 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationReportRe= sponsePBImpl.class=0A= 2015-02-21 19:01:18,433 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/CancelDelegationTokenR= esponsePBImpl.class=0A= 2015-02-21 19:01:18,433 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $1$1.class=0A= 2015-02-21 19:01:18,433 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $1.class=0A= 2015-02-21 19:01:18,433 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $2$1.class=0A= 2015-02-21 19:01:18,434 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $2.class=0A= 2015-02-21 19:01:18,434 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $3$1.class=0A= 2015-02-21 19:01:18,434 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $3.class=0A= 2015-02-21 19:01:18,434 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $4$1.class=0A= 2015-02-21 19:01:18,434 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $4.class=0A= 2015-02-21 19:01:18,434 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $5$1.class=0A= 2015-02-21 19:01:18,434 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $5.class=0A= 2015-02-21 19:01:18,434 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $6$1.class=0A= 2015-02-21 19:01:18,434 
DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= $6.class=0A= 2015-02-21 19:01:18,434 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl= .class=0A= 2015-02-21 19:01:18,435 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMas= terResponsePBImpl$1$1.class=0A= 2015-02-21 19:01:18,435 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMas= terResponsePBImpl$1.class=0A= 2015-02-21 19:01:18,435 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMas= terResponsePBImpl$2$1.class=0A= 2015-02-21 19:01:18,435 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMas= terResponsePBImpl$2.class=0A= 2015-02-21 19:01:18,435 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMas= terResponsePBImpl$3$1.class=0A= 2015-02-21 19:01:18,435 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMas= terResponsePBImpl$3.class=0A= 2015-02-21 19:01:18,435 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMas= terResponsePBImpl.class=0A= 2015-02-21 19:01:18,436 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RenewDelegationTokenRe= sponsePBImpl.class=0A= 2015-02-21 19:01:18,436 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetQueueInfoResponsePB= Impl.class=0A= 2015-02-21 19:01:18,436 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateRequestPBImpl$= 1$1.class=0A= 2015-02-21 19:01:18,436 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateRequestPBImpl$= 1.class=0A= 2015-02-21 19:01:18,436 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateRequestPBImpl$= 2$1.class=0A= 2015-02-21 19:01:18,436 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateRequestPBImpl$= 2.class=0A= 2015-02-21 19:01:18,436 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateRequestPBImpl$= 3$1.class=0A= 2015-02-21 19:01:18,436 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateRequestPBImpl$= 3.class=0A= 2015-02-21 19:01:18,436 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateRequestPBImpl.= class=0A= 2015-02-21 19:01:18,437 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetNewApplicationRespo= nsePBImpl.class=0A= 2015-02-21 19:01:18,437 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetDelegationTokenResp= onsePBImpl.class=0A= 2015-02-21 19:01:18,437 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainersRespons= ePBImpl$1$1.class=0A= 2015-02-21 19:01:18,437 DEBUG [main] org.mortbay.log: Skipping 
entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainersRespons= ePBImpl$1.class=0A= 2015-02-21 19:01:18,437 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainersRespons= ePBImpl$2$1.class=0A= 2015-02-21 19:01:18,437 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainersRespons= ePBImpl$2.class=0A= 2015-02-21 19:01:18,437 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainersRespons= ePBImpl.class=0A= 2015-02-21 19:01:18,438 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetDelegationTokenRequ= estPBImpl.class=0A= 2015-02-21 19:01:18,438 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StopContainersResponse= PBImpl$1$1.class=0A= 2015-02-21 19:01:18,438 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StopContainersResponse= PBImpl$1.class=0A= 2015-02-21 19:01:18,438 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StopContainersResponse= PBImpl.class=0A= 2015-02-21 19:01:18,438 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainersRequest= PBImpl.class=0A= 2015-02-21 19:01:18,438 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StartContainerRequestP= BImpl.class=0A= 2015-02-21 19:01:18,438 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationAttempts= ResponsePBImpl$1$1.class=0A= 2015-02-21 19:01:18,438 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationAttempts= ResponsePBImpl$1.class=0A= 2015-02-21 19:01:18,439 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationAttempts= ResponsePBImpl.class=0A= 2015-02-21 19:01:18,439 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetContainerStatusesRe= questPBImpl.class=0A= 2015-02-21 19:01:18,439 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequest= PBImpl$1$1.class=0A= 2015-02-21 19:01:18,439 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequest= PBImpl$1.class=0A= 2015-02-21 19:01:18,439 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequest= PBImpl.class=0A= 2015-02-21 19:01:18,439 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetContainersResponseP= BImpl$1$1.class=0A= 2015-02-21 19:01:18,439 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetContainersResponseP= BImpl$1.class=0A= 2015-02-21 19:01:18,439 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetContainersResponseP= BImpl.class=0A= 2015-02-21 19:01:18,440 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationAttemptR= eportResponsePBImpl.class=0A= 2015-02-21 19:01:18,440 DEBUG [main] org.mortbay.log: Skipping entry: = 
org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetClusterMetricsReque= stPBImpl.class=0A= 2015-02-21 19:01:18,440 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetQueueUserAclsInfoRe= questPBImpl.class=0A= 2015-02-21 19:01:18,440 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/MoveApplicationAcrossQ= ueuesResponsePBImpl.class=0A= 2015-02-21 19:01:18,440 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/FinishApplicationMaste= rRequestPBImpl.class=0A= 2015-02-21 19:01:18,440 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetClusterMetricsRespo= nsePBImpl.class=0A= 2015-02-21 19:01:18,440 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetContainerReportResp= onsePBImpl.class=0A= 2015-02-21 19:01:18,441 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationReportRe= questPBImpl.class=0A= 2015-02-21 19:01:18,441 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetClusterNodesRespons= ePBImpl$1$1.class=0A= 2015-02-21 19:01:18,441 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetClusterNodesRespons= ePBImpl$1.class=0A= 2015-02-21 19:01:18,441 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetClusterNodesRespons= ePBImpl.class=0A= 2015-02-21 19:01:18,441 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/MoveApplicationAcrossQ= ueuesRequestPBImpl.class=0A= 2015-02-21 19:01:18,441 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/FinishApplicationMaste= rResponsePBImpl.class=0A= 2015-02-21 19:01:18,441 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/KillApplicationRespons= ePBImpl.class=0A= 2015-02-21 19:01:18,441 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetQueueUserAclsInfoRe= sponsePBImpl$1$1.class=0A= 2015-02-21 19:01:18,442 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetQueueUserAclsInfoRe= sponsePBImpl$1.class=0A= 2015-02-21 19:01:18,442 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetQueueUserAclsInfoRe= sponsePBImpl.class=0A= 2015-02-21 19:01:18,442 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetContainerStatusesRe= sponsePBImpl.class=0A= 2015-02-21 19:01:18,442 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RenewDelegationTokenRe= questPBImpl.class=0A= 2015-02-21 19:01:18,442 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/CancelDelegationTokenR= equestPBImpl.class=0A= 2015-02-21 19:01:18,442 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRespons= ePBImpl$1$1.class=0A= 2015-02-21 19:01:18,442 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRespons= ePBImpl$1.class=0A= 2015-02-21 19:01:18,443 DEBUG [main] org.mortbay.log: Skipping 
entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRespons= ePBImpl.class=0A= 2015-02-21 19:01:18,443 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetClusterNodesRequest= PBImpl$1$1.class=0A= 2015-02-21 19:01:18,443 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetClusterNodesRequest= PBImpl$1.class=0A= 2015-02-21 19:01:18,443 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetClusterNodesRequest= PBImpl.class=0A= 2015-02-21 19:01:18,443 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetQueueInfoRequestPBI= mpl.class=0A= 2015-02-21 19:01:18,443 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/SubmitApplicationRespo= nsePBImpl.class=0A= 2015-02-21 19:01:18,443 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/KillApplicationRequest= PBImpl.class=0A= 2015-02-21 19:01:18,443 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetContainerReportRequ= estPBImpl.class=0A= 2015-02-21 19:01:18,444 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/protocolrecords/impl/pb/StopContainersRequestP= BImpl.class=0A= 2015-02-21 19:01:18,444 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/package-info.class=0A= 2015-02-21 19:01:18,444 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.class=0A= 2015-02-21 19:01:18,444 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PriorityPBImpl.class=0A= 2015-02-21 19:01:18,444 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ResourcePBImpl.class=0A= 2015-02-21 19:01:18,444 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ApplicationAttemptIdPBImpl.cla= ss=0A= 2015-02-21 19:01:18,444 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ApplicationIdPBImpl.class=0A= 2015-02-21 19:01:18,444 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContractPBImpl$1$1.c= lass=0A= 2015-02-21 19:01:18,445 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContractPBImpl$1.cla= ss=0A= 2015-02-21 19:01:18,445 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContractPBImpl$2$1.c= lass=0A= 2015-02-21 19:01:18,445 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContractPBImpl$2.cla= ss=0A= 2015-02-21 19:01:18,445 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContractPBImpl.class=0A= 2015-02-21 19:01:18,445 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContainerPBImpl.clas= s=0A= 2015-02-21 19:01:18,445 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PreemptionResourceRequestPBImp= l.class=0A= 2015-02-21 19:01:18,445 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerIdPBImpl.class=0A= 2015-02-21 19:01:18,445 DEBUG [main] 
org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ApplicationReportPBImpl.class=0A= 2015-02-21 19:01:18,446 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/TokenPBImpl.class=0A= 2015-02-21 19:01:18,446 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ApplicationSubmissionContextPB= Impl.class=0A= 2015-02-21 19:01:18,446 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl$1= $1.class=0A= 2015-02-21 19:01:18,446 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl$1= .class=0A= 2015-02-21 19:01:18,446 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl$2= $1.class=0A= 2015-02-21 19:01:18,446 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl$2= .class=0A= 2015-02-21 19:01:18,447 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl$3= $1.class=0A= 2015-02-21 19:01:18,447 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl$3= .class=0A= 2015-02-21 19:01:18,447 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl$4= $1.class=0A= 2015-02-21 19:01:18,447 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl$4= .class=0A= 2015-02-21 19:01:18,447 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl.c= lass=0A= 2015-02-21 19:01:18,447 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/LocalResourcePBImpl.class=0A= 2015-02-21 19:01:18,447 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/URLPBImpl.class=0A= 2015-02-21 19:01:18,447 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ResourceOptionPBImpl.class=0A= 2015-02-21 19:01:18,448 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/NodeIdPBImpl.class=0A= 2015-02-21 19:01:18,448 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/QueueInfoPBImpl$1$1.class=0A= 2015-02-21 19:01:18,448 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/QueueInfoPBImpl$1.class=0A= 2015-02-21 19:01:18,448 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/QueueInfoPBImpl$2$1.class=0A= 2015-02-21 19:01:18,448 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/QueueInfoPBImpl$2.class=0A= 2015-02-21 19:01:18,448 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/QueueInfoPBImpl.class=0A= 2015-02-21 19:01:18,448 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/NodeReportPBImpl.class=0A= 2015-02-21 19:01:18,448 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerPBImpl.class=0A= 2015-02-21 19:01:18,449 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerStatusPBImpl.class=0A= 
2015-02-21 19:01:18,449 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/PreemptionMessagePBImpl.class=0A= 2015-02-21 19:01:18,449 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/StrictPreemptionContractPBImpl= $1$1.class=0A= 2015-02-21 19:01:18,449 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/StrictPreemptionContractPBImpl= $1.class=0A= 2015-02-21 19:01:18,449 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/StrictPreemptionContractPBImpl= .class=0A= 2015-02-21 19:01:18,449 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/SerializedExceptionPBImpl.clas= s=0A= 2015-02-21 19:01:18,450 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ProtoBase.class=0A= 2015-02-21 19:01:18,450 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerResourceIncreaseReque= stPBImpl.class=0A= 2015-02-21 19:01:18,450 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ResourceBlacklistRequestPBImpl= .class=0A= 2015-02-21 19:01:18,450 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ProtoUtils.class=0A= 2015-02-21 19:01:18,450 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ApplicationAttemptReportPBImpl= .class=0A= 2015-02-21 19:01:18,450 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerResourceDecreasePBImp= l.class=0A= 2015-02-21 19:01:18,450 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerReportPBImpl.class=0A= 2015-02-21 19:01:18,451 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/YarnClusterMetricsPBImpl.class=0A= 2015-02-21 19:01:18,451 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ApplicationResourceUsageReport= PBImpl.class=0A= 2015-02-21 19:01:18,451 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/QueueUserACLInfoPBImpl$1$1.cla= ss=0A= 2015-02-21 19:01:18,451 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/QueueUserACLInfoPBImpl$1.class=0A= 2015-02-21 19:01:18,451 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/QueueUserACLInfoPBImpl.class=0A= 2015-02-21 19:01:18,451 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/NMTokenPBImpl.class=0A= 2015-02-21 19:01:18,451 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/records/impl/pb/ContainerResourceIncreasePBImp= l.class=0A= 2015-02-21 19:01:18,452 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/ApplicationClientProtocolPB.class=0A= 2015-02-21 19:01:18,452 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/ContainerManagementProtocolPB.class=0A= 2015-02-21 19:01:18,452 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/ApplicationMasterProtocolPB.class=0A= 2015-02-21 19:01:18,452 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/api/ApplicationHistoryProtocolPB.class=0A= 2015-02-21 19:01:18,452 DEBUG [main] org.mortbay.log: Skipping entry: = 
org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/package-info.cl= ass=0A= 2015-02-21 19:01:18,452 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshUserToGr= oupsMappingsResponsePBImpl.class=0A= 2015-02-21 19:01:18,452 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshSuperUse= rGroupsConfigurationResponsePBImpl.class=0A= 2015-02-21 19:01:18,452 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshServiceA= clsRequestPBImpl.class=0A= 2015-02-21 19:01:18,452 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshServiceA= clsResponsePBImpl.class=0A= 2015-02-21 19:01:18,452 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/UpdateNodeResou= rceRequestPBImpl$1$1.class=0A= 2015-02-21 19:01:18,453 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/UpdateNodeResou= rceRequestPBImpl$1.class=0A= 2015-02-21 19:01:18,453 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/UpdateNodeResou= rceRequestPBImpl.class=0A= 2015-02-21 19:01:18,453 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshSuperUse= rGroupsConfigurationRequestPBImpl.class=0A= 2015-02-21 19:01:18,453 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshQueuesRe= questPBImpl.class=0A= 2015-02-21 19:01:18,453 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshUserToGr= oupsMappingsRequestPBImpl.class=0A= 2015-02-21 19:01:18,453 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshNodesRes= ponsePBImpl.class=0A= 2015-02-21 19:01:18,453 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshAdminAcl= sResponsePBImpl.class=0A= 2015-02-21 19:01:18,453 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/UpdateNodeResou= rceResponsePBImpl.class=0A= 2015-02-21 19:01:18,453 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshAdminAcl= sRequestPBImpl.class=0A= 2015-02-21 19:01:18,454 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshQueuesRe= sponsePBImpl.class=0A= 2015-02-21 19:01:18,454 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RefreshNodesReq= uestPBImpl.class=0A= 2015-02-21 19:01:18,454 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/ResourceManagerAdministrationProtocolPB= .class=0A= 2015-02-21 19:01:18,454 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/impl/pb/client/ResourceManagerAdministr= ationProtocolPBClientImpl.class=0A= 2015-02-21 19:01:18,454 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/api/impl/pb/service/ResourceManagerAdminist= rationProtocolPBServiceImpl.class=0A= 2015-02-21 19:01:18,454 DEBUG [main] org.mortbay.log: Skipping entry: = 
org/apache/hadoop/yarn/server/security/package-info.class=0A= 2015-02-21 19:01:18,454 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/server/security/ApplicationACLsManager.class=0A= 2015-02-21 19:01:18,455 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/package-info.class=0A= 2015-02-21 19:01:18,455 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/AbstractEvent.class=0A= 2015-02-21 19:01:18,455 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/Event.class=0A= 2015-02-21 19:01:18,455 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/EventHandler.class=0A= 2015-02-21 19:01:18,455 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/Dispatcher.class=0A= 2015-02-21 19:01:18,455 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/AsyncDispatcher$1.class=0A= 2015-02-21 19:01:18,455 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/AsyncDispatcher$GenericEventHandler.class=0A= 2015-02-21 19:01:18,455 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/AsyncDispatcher$MultiListenerHandler.class=0A= 2015-02-21 19:01:18,455 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/event/AsyncDispatcher.class=0A= 2015-02-21 19:01:18,455 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/ipc/package-info.class=0A= 2015-02-21 19:01:18,456 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/ipc/RPCUtil.class=0A= 2015-02-21 19:01:18,456 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/ipc/YarnRPC.class=0A= 2015-02-21 19:01:18,456 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/ipc/HadoopYarnProtoRPC.class=0A= 2015-02-21 19:01:18,456 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/timeline/package-info.class=0A= 2015-02-21 19:01:18,456 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/timeline/TimelineUtils.class=0A= 2015-02-21 19:01:18,456 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/package-info.class=0A= 2015-02-21 19:01:18,456 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/StringHelper.class=0A= 2015-02-21 19:01:18,456 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ConverterUtils.class=0A= 2015-02-21 19:01:18,456 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/FSDownload$1.class=0A= 2015-02-21 19:01:18,457 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/FSDownload$2.class=0A= 2015-02-21 19:01:18,457 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/FSDownload$3.class=0A= 2015-02-21 19:01:18,457 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/FSDownload$4.class=0A= 2015-02-21 19:01:18,457 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/FSDownload.class=0A= 2015-02-21 19:01:18,457 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/RMHAUtils.class=0A= 2015-02-21 19:01:18,457 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ProcfsBasedProcessTree$MemInfo.class=0A= 2015-02-21 19:01:18,457 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ProcfsBasedProcessTree$ProcessInfo.class=0A= 2015-02-21 19:01:18,457 DEBUG [main] 
org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ProcfsBasedProcessTree$ProcessTreeSmapMemInfo= .class=0A= 2015-02-21 19:01:18,458 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ProcfsBasedProcessTree$ProcessSmapMemoryInfo.= class=0A= 2015-02-21 19:01:18,458 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ProcfsBasedProcessTree$1.class=0A= 2015-02-21 19:01:18,458 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ProcfsBasedProcessTree.class=0A= 2015-02-21 19:01:18,458 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ResourceCalculatorProcessTree.class=0A= 2015-02-21 19:01:18,459 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/Times$1.class=0A= 2015-02-21 19:01:18,459 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/Times.class=0A= 2015-02-21 19:01:18,459 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/RackResolver.class=0A= 2015-02-21 19:01:18,459 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ResourceCalculatorPlugin.class=0A= 2015-02-21 19:01:18,459 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/WindowsResourceCalculatorPlugin.class=0A= 2015-02-21 19:01:18,459 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/WindowsBasedProcessTree$ProcessInfo.class=0A= 2015-02-21 19:01:18,459 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/WindowsBasedProcessTree.class=0A= 2015-02-21 19:01:18,459 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.class=0A= 2015-02-21 19:01:18,460 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/resource/ResourceCalculator.class=0A= 2015-02-21 19:01:18,460 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/resource/Resources$1.class=0A= 2015-02-21 19:01:18,460 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/resource/Resources$2.class=0A= 2015-02-21 19:01:18,460 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/resource/Resources.class=0A= 2015-02-21 19:01:18,460 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.class=0A= 2015-02-21 19:01:18,460 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/Clock.class=0A= 2015-02-21 19:01:18,460 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/SystemClock.class=0A= 2015-02-21 19:01:18,460 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/Apps.class=0A= 2015-02-21 19:01:18,460 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/TrackingUriPlugin.class=0A= 2015-02-21 19:01:18,460 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/YarnVersionInfo.class=0A= 2015-02-21 19:01:18,461 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/LinuxResourceCalculatorPlugin.class=0A= 2015-02-21 19:01:18,461 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/AbstractLivelinessMonitor$PingChecker.class=0A= 2015-02-21 19:01:18,461 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/AbstractLivelinessMonitor$1.class=0A= 2015-02-21 19:01:18,461 DEBUG [main] org.mortbay.log: Skipping entry: = 
org/apache/hadoop/yarn/util/AbstractLivelinessMonitor.class=0A= 2015-02-21 19:01:18,461 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ApplicationClassLoader$1.class=0A= 2015-02-21 19:01:18,461 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/ApplicationClassLoader.class=0A= 2015-02-21 19:01:18,461 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/util/AuxiliaryServiceHelper.class=0A= 2015-02-21 19:01:18,461 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/package-info.class=0A= 2015-02-21 19:01:18,461 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachine.class=0A= 2015-02-21 19:01:18,462 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/InvalidStateTransitonException.class=0A= 2015-02-21 19:01:18,462 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/SingleArcTransition.class=0A= 2015-02-21 19:01:18,462 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/VisualizeStateMachine.class=0A= 2015-02-21 19:01:18,462 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/Graph$Edge.class=0A= 2015-02-21 19:01:18,462 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/Graph$Node.class=0A= 2015-02-21 19:01:18,462 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/Graph.class=0A= 2015-02-21 19:01:18,463 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/MultipleArcTransition.class=0A= 2015-02-21 19:01:18,463 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachineFactory$ApplicableTransition.cla= ss=0A= 2015-02-21 19:01:18,463 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachineFactory$TransitionsListNode.clas= s=0A= 2015-02-21 19:01:18,463 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachineFactory$ApplicableSingleOrMultip= leTransition.class=0A= 2015-02-21 19:01:18,463 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachineFactory$Transition.class=0A= 2015-02-21 19:01:18,463 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachineFactory$SingleInternalArc.class=0A= 2015-02-21 19:01:18,463 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachineFactory$MultipleInternalArc.clas= s=0A= 2015-02-21 19:01:18,463 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachineFactory$InternalStateMachine.cla= ss=0A= 2015-02-21 19:01:18,463 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/state/StateMachineFactory.class=0A= 2015-02-21 19:01:18,463 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factories/impl/pb/package-info.class=0A= 2015-02-21 19:01:18,464 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factories/impl/pb/RecordFactoryPBImpl.class=0A= 2015-02-21 19:01:18,464 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factories/impl/pb/RpcClientFactoryPBImpl.class=0A= 2015-02-21 19:01:18,464 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factories/impl/pb/RpcServerFactoryPBImpl.class=0A= 2015-02-21 19:01:18,464 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factories/package-info.class=0A= 2015-02-21 19:01:18,464 DEBUG [main] 
org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factories/RpcServerFactory.class=0A= 2015-02-21 19:01:18,464 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factories/RpcClientFactory.class=0A= 2015-02-21 19:01:18,464 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/package-info.class=0A= 2015-02-21 19:01:18,464 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogDeletionService$LogDel= etionTask.class=0A= 2015-02-21 19:01:18,464 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogDeletionService.class=0A= 2015-02-21 19:01:18,465 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat$LogKey.class=0A= 2015-02-21 19:01:18,465 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat$LogValue.class=0A= 2015-02-21 19:01:18,465 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat$LogWriter$1.cla= ss=0A= 2015-02-21 19:01:18,465 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat$LogWriter.class=0A= 2015-02-21 19:01:18,465 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat$LogReader.class=0A= 2015-02-21 19:01:18,465 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat$ContainerLogsRe= ader.class=0A= 2015-02-21 19:01:18,465 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.class=0A= 2015-02-21 19:01:18,465 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/ContainerLogsRetentionPolicy.class=0A= 2015-02-21 19:01:18,466 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/LogAggregationUtils.class=0A= 2015-02-21 19:01:18,466 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/logaggregation/LogCLIHelpers.class=0A= 2015-02-21 19:01:18,466 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factory/providers/package-info.class=0A= 2015-02-21 19:01:18,466 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/factory/providers/RpcFactoryProvider.class=0A= 2015-02-21 19:01:18,466 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/package-info.class=0A= 2015-02-21 19:01:18,466 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/FileSystemBasedConfigurationProvider.class=0A= 2015-02-21 19:01:18,466 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/client/ConfiguredRMFailoverProxyProvider.class=0A= 2015-02-21 19:01:18,466 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/client/RMFailoverProxyProvider.class=0A= 2015-02-21 19:01:18,466 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/client/RMProxy$1.class=0A= 2015-02-21 19:01:18,466 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/client/RMProxy.class=0A= 2015-02-21 19:01:18,467 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/client/AHSProxy$1.class=0A= 2015-02-21 19:01:18,467 DEBUG [main] org.mortbay.log: Skipping entry: = org/apache/hadoop/yarn/client/AHSProxy.class=0A= 2015-02-21 19:01:18,467 DEBUG [main] org.mortbay.log: Skipping entry: = 
2015-02-21 19:01:18,468 DEBUG [main] org.mortbay.log: Skipping entry: META-INF/maven/
2015-02-21 19:01:18,468 DEBUG [main] org.mortbay.log: Skipping entry: META-INF/maven/org.apache.hadoop/
2015-02-21 19:01:18,468 DEBUG [main] org.mortbay.log: Skipping entry: META-INF/maven/org.apache.hadoop/hadoop-yarn-common/
2015-02-21 19:01:18,468 DEBUG [main] org.mortbay.log: Skipping entry: META-INF/maven/org.apache.hadoop/hadoop-yarn-common/pom.xml
2015-02-21 19:01:18,468 DEBUG [main] org.mortbay.log: Skipping entry: META-INF/maven/org.apache.hadoop/hadoop-yarn-common/pom.properties
2015-02-21 19:01:18,469 DEBUG [main] org.mortbay.log: Checking Resource aliases
2015-02-21 19:01:18,469 DEBUG [main] org.mortbay.log: webapp=file:/tmp/Jetty_0_0_0_0_51221_mapreduce____6xjp50/webapp/
2015-02-21 19:01:18,483 DEBUG [main] org.mortbay.log: getResource(org/mortbay/jetty/webapp/webdefault.xml)=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jetty-6.1.26.cloudera.4.jar!/org/mortbay/jetty/webapp/webdefault.xml
2015-02-21 19:01:18,483 DEBUG [main] org.mortbay.log: parse: jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jetty-6.1.26.cloudera.4.jar!/org/mortbay/jetty/webapp/webdefault.xml
2015-02-21 19:01:18,485 DEBUG [main] org.mortbay.log: parsing: sid=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jetty-6.1.26.cloudera.4.jar!/org/mortbay/jetty/webapp/webdefault.xml,pid=null
2015-02-21 19:01:18,495 DEBUG [main] org.mortbay.log: ContextParam: org.mortbay.jetty.webapp.NoTLDJarPattern=start.jar|ant-.*\.jar|dojo-.*\.jar|jetty-.*\.jar|jsp-api-.*\.jar|junit-.*\.jar|servlet-api-.*\.jar|dnsns\.jar|rt\.jar|jsse\.jar|tools\.jar|sunpkcs11\.jar|sunjce_provider\.jar|xerces.*\.jar
2015-02-21 19:01:18,497 DEBUG [main] org.mortbay.log: loaded class org.apache.jasper.servlet.JspServlet from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:18,500 DEBUG [main] org.mortbay.log: filterNameMap={guice=guice, safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 19:01:18,500 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15), (F=AM_PROXY_FILTER,[/stacks],[],15), (F=AM_PROXY_FILTER,[/logLevel],[],15), (F=AM_PROXY_FILTER,[/metrics],[],15), (F=AM_PROXY_FILTER,[/jmx],[],15), (F=AM_PROXY_FILTER,[/conf],[],15), (F=AM_PROXY_FILTER,[/mapreduce/*],[],15), (F=AM_PROXY_FILTER,[/ws/*],[],15), (F=guice,[/*],[],15)]
2015-02-21 19:01:18,500 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 19:01:18,500 DEBUG [main] org.mortbay.log: servletPathMap={/metrics=metrics, /conf=conf, /jmx=jmx, /stacks=stacks, /logLevel=logLevel}
2015-02-21 19:01:18,500 DEBUG [main] org.mortbay.log: servletNameMap={jmx=jmx, metrics=metrics, logLevel=logLevel, conf=conf, stacks=stacks}
2015-02-21 19:01:18,500 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + default as servlet
2015-02-21 19:01:18,500 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + jsp as servlet
2015-02-21 19:01:18,500 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + (S=default,[/]) as servletMapping
2015-02-21 19:01:18,500 DEBUG [main] org.mortbay.log: Container ServletHandler@23f3e3fd + (S=jsp,[*.jsp, *.jspf, *.jspx, *.xsp, *.JSP, *.JSPF, *.JSPX, *.XSP]) as servletMapping
2015-02-21 19:01:18,500 DEBUG [main] org.mortbay.log: filterNameMap={guice=guice, safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 19:01:18,500 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15), (F=AM_PROXY_FILTER,[/stacks],[],15), (F=AM_PROXY_FILTER,[/logLevel],[],15), (F=AM_PROXY_FILTER,[/metrics],[],15), (F=AM_PROXY_FILTER,[/jmx],[],15), (F=AM_PROXY_FILTER,[/conf],[],15), (F=AM_PROXY_FILTER,[/mapreduce/*],[],15), (F=AM_PROXY_FILTER,[/ws/*],[],15), (F=guice,[/*],[],15)]
2015-02-21 19:01:18,500 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 19:01:18,500 DEBUG [main] org.mortbay.log: servletPathMap={*.XSP=jsp, *.jsp=jsp, *.jspx=jsp, *.JSPF=jsp, /conf=conf, /=default, *.xsp=jsp, /stacks=stacks, /logLevel=logLevel, *.JSPX=jsp, *.jspf=jsp, /metrics=metrics, /jmx=jmx, *.JSP=jsp}
2015-02-21 19:01:18,501 DEBUG [main] org.mortbay.log: servletNameMap={jsp=jsp, default=default, jmx=jmx, metrics=metrics, logLevel=logLevel, conf=conf, stacks=stacks}
2015-02-21 19:01:18,501 DEBUG [main] org.mortbay.log: Configuring web-jetty.xml
2015-02-21 19:01:18,502 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-aws-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,570 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-tools-1.5.0-cdh5.3.0.jar
file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-column= -1.5.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,580 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-common= -1.5.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,580 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-test-h= adoop2-1.5.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,581 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-auth-2.= 5.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,581 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-scroog= e_2.10-1.5.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,581 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-protob= uf-1.5.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,582 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-common-= 2.5.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,590 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-genera= tor-1.5.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,590 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-hadoop= -1.5.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,591 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-encodi= ng-1.5.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,592 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-hadoop= -bundle-1.5.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,599 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-nfs-2.5= .0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,600 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-format= -2.1.0-cdh5.3.0-javadoc.jar=0A= 2015-02-21 19:01:18,600 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-scala_= 2.10-1.5.0-cdh5.4.0-SNAPSHOT.jar=0A= 2015-02-21 19:01:18,600 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-format= -2.1.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,601 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-avro-1= .5.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,602 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-thrift= -1.5.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,603 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-cascad= ing-1.5.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,603 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-pig-1.= 5.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,603 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-pig-bu= ndle-1.5.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,609 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/parquet-jackso= n-1.5.0-cdh5.3.0.jar=0A= 2015-02-21 19:01:18,646 DEBUG 
[main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jackson-core-a= sl-1.8.8.jar=0A= 2015-02-21 19:01:18,647 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jackson-xc-1.8= .8.jar=0A= 2015-02-21 19:01:18,647 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-config= uration-1.6.jar=0A= 2015-02-21 19:01:18,648 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/curator-recipe= s-2.6.0.jar=0A= 2015-02-21 19:01:18,648 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/httpclient-4.2= .5.jar=0A= 2015-02-21 19:01:18,649 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jasper-compile= r-5.5.23.jar=0A= 2015-02-21 19:01:18,654 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/snappy-java-1.= 0.4.1.jar=0A= 2015-02-21 19:01:18,654 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/netty-3.6.2.Fi= nal.jar=0A= 2015-02-21 19:01:18,655 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/log4j-1.2.17.j= ar=0A= 2015-02-21 19:01:18,657 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/slf4j-log4j12-= 1.7.5.jar=0A= 2015-02-21 19:01:18,657 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/httpcore-4.2.5= .jar=0A= 2015-02-21 19:01:18,657 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jersey-core-1.= 9.jar=0A= 2015-02-21 19:01:18,658 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-loggin= g-1.1.3.jar=0A= 2015-02-21 19:01:18,658 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/slf4j-api-1.7.= 5.jar=0A= 2015-02-21 19:01:18,658 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-el-1.0= .jar=0A= 2015-02-21 19:01:18,659 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop/lib/aws-= java-sdk-1.7.4.jar=0A= 2015-02-21 19:01:18,669 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsr305-1.3.9.j= ar=0A= 2015-02-21 19:01:18,669 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/guava-11.0.2.j= ar=0A= 2015-02-21 19:01:18,671 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/avro-1.7.6-cdh= 5.3.0.jar=0A= 2015-02-21 19:01:18,672 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/curator-client= -2.6.0.jar=0A= 2015-02-21 19:01:18,672 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jasper-runtime= -5.5.23.jar=0A= 2015-02-21 19:01:18,672 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/apacheds-kerbe= ros-codec-2.0.0-M15.jar=0A= 2015-02-21 19:01:18,673 DEBUG [main] org.mortbay.log: TLD search of = 
file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/api-util-1.0.0= -M20.jar=0A= 2015-02-21 19:01:18,673 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jets3t-0.9.0.j= ar=0A= 2015-02-21 19:01:18,674 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hamcrest-core-= 1.3.jar=0A= 2015-02-21 19:01:18,674 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-io-2.4= .jar=0A= 2015-02-21 19:01:18,674 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jersey-server-= 1.9.jar=0A= 2015-02-21 19:01:18,675 DEBUG [main] org.mortbay.log: TLD found = jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jersey-ser= ver-1.9.jar!/META-INF/taglib.tld=0A= 2015-02-21 19:01:18,675 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/mockito-all-1.= 8.5.jar=0A= 2015-02-21 19:01:18,677 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jettison-1.1.j= ar=0A= 2015-02-21 19:01:18,677 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/stax-api-1.0-2= .jar=0A= 2015-02-21 19:01:18,677 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-collec= tions-3.2.1.jar=0A= 2015-02-21 19:01:18,678 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-beanut= ils-core-1.8.0.jar=0A= 2015-02-21 19:01:18,678 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/activation-1.1= .jar=0A= 2015-02-21 19:01:18,679 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/xz-1.0.jar=0A= 2015-02-21 19:01:18,679 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/curator-framew= ork-2.6.0.jar=0A= 2015-02-21 19:01:18,679 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-lang-2= .6.jar=0A= 2015-02-21 19:01:18,680 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-cli-1.= 2.jar=0A= 2015-02-21 19:01:18,680 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-digest= er-1.8.jar=0A= 2015-02-21 19:01:18,680 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jersey-json-1.= 9.jar=0A= 2015-02-21 19:01:18,680 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-math3-= 3.1.1.jar=0A= 2015-02-21 19:01:18,682 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/asm-3.2.jar=0A= 2015-02-21 19:01:18,682 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-httpcl= ient-3.1.jar=0A= 2015-02-21 19:01:18,682 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/gson-2.2.4.jar=0A= 2015-02-21 19:01:18,683 DEBUG [main] org.mortbay.log: TLD search of = file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/api-asn1-api-1= .0.0-M20.jar=0A= 
2015-02-21 19:01:18,683 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/apacheds-i18n-2.0.0-M15.jar
2015-02-21 19:01:18,683 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-net-3.1.jar
2015-02-21 19:01:18,683 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/zookeeper-3.4.5-cdh5.3.0.jar
2015-02-21 19:01:18,684 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-beanutils-1.7.0.jar
2015-02-21 19:01:18,685 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/java-xmlbuilder-0.4.jar
2015-02-21 19:01:18,685 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/xmlenc-0.52.jar
2015-02-21 19:01:18,685 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-codec-1.4.jar
2015-02-21 19:01:18,685 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-compress-1.4.1.jar
2015-02-21 19:01:18,686 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/paranamer-2.3.jar
2015-02-21 19:01:18,686 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hue-plugins-3.7.0-cdh5.3.0.jar
2015-02-21 19:01:18,688 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jaxb-impl-2.2.3-1.jar
2015-02-21 19:01:18,689 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jaxb-api-2.2.2.jar
2015-02-21 19:01:18,689 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsch-0.1.42.jar
2015-02-21 19:01:18,689 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jackson-jaxrs-1.8.8.jar
2015-02-21 19:01:18,689 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jackson-mapper-asl-1.8.8.jar
2015-02-21 19:01:18,690 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/protobuf-java-2.5.0.jar
2015-02-21 19:01:18,690 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-hdfs-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,694 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-hdfs-nfs-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,695 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-hdfs-2.5.0-cdh5.3.0-tests.jar
2015-02-21 19:01:18,696 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/commons-daemon-1.0.13.jar
2015-02-21 19:01:18,697 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,698 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-applications-distributedshell-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,698 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-server-web-proxy-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,698 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-server-nodemanager-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,699 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-server-tests-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,699 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-server-resourcemanager-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,700 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-server-common-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,700 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-api-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,701 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-server-applicationhistoryservice-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,701 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-client-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,702 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-applications-unmanaged-am-launcher-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,702 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/guice-3.0.jar
2015-02-21 19:01:18,703 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/aopalliance-1.0.jar
2015-02-21 19:01:18,703 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jersey-guice-1.9.jar
2015-02-21 19:01:18,703 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jersey-client-1.9.jar
2015-02-21 19:01:18,703 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/leveldbjni-all-1.8.jar
2015-02-21 19:01:18,704 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jline-0.9.94.jar
2015-02-21 19:01:18,704 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/javax.inject-1.jar
2015-02-21 19:01:18,704 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/guice-servlet-3.0.jar
2015-02-21 19:01:18,704 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/microsoft-windowsazure-storage-sdk-0.6.0.jar
2015-02-21 19:01:18,705 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,705 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-shuffle-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,705 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-rumen-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,706 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-hs-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,706 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-azure-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,706 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-hs-plugins-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,706 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-nativetask-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,706 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-sls-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,707 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-examples-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,707 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-common-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,708 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-app-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,708 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-gridmix-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,708 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/metrics-core-3.0.1.jar
2015-02-21 19:01:18,709 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-archives-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,709 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-extras-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,709 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jackson-annotations-2.2.3.jar
2015-02-21 19:01:18,709 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-core-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,711 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/joda-time-1.6.jar
2015-02-21 19:01:18,712 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-datajoin-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,712 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.0-tests.jar
2015-02-21 19:01:18,713 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-distcp-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,713 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jackson-core-2.2.3.jar
2015-02-21 19:01:18,713 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-streaming-2.5.0-cdh5.3.0.jar
2015-02-21 19:01:18,714 DEBUG [main] org.mortbay.log: TLD search of file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jackson-databind-2.2.3.jar
2015-02-21 19:01:18,715 DEBUG [main] org.mortbay.log: TLD search of file:/data/yarn/nm/usercache/cloudera/appcache/application_1424550134651_0002/filecache/10/job.jar/job.jar
2015-02-21 19:01:18,715 DEBUG [main] org.mortbay.log: TLD search of file:/usr/java/jdk1.7.0_67/jre/lib/ext/sunec.jar
2015-02-21 19:01:18,715 DEBUG [main] org.mortbay.log: TLD search of file:/usr/java/jdk1.7.0_67/jre/lib/ext/zipfs.jar
2015-02-21 19:01:18,715 DEBUG [main] org.mortbay.log: TLD search of file:/usr/java/jdk1.7.0_67/jre/lib/ext/localedata.jar
2015-02-21 19:01:18,716 DEBUG [main] org.mortbay.log: loaded class com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl from null
2015-02-21 19:01:18,717 DEBUG [main] org.mortbay.log: loaded class com.sun.org.apache.xerces.internal.impl.dv.dtd.DTDDVFactoryImpl from null
2015-02-21 19:01:18,717 DEBUG [main] org.mortbay.log: getResource(javax/servlet/jsp/resources/web-jsptaglibrary_1_1.dtd)=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsp-api-2.1.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_1_1.dtd
2015-02-21 19:01:18,717 DEBUG [main] org.mortbay.log: getResource(javax/servlet/jsp/resources/web-jsptaglibrary_1_2.dtd)=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsp-api-2.1.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_1_2.dtd
2015-02-21 19:01:18,718 DEBUG [main] org.mortbay.log: getResource(javax/servlet/jsp/resources/web-jsptaglibrary_2_0.xsd)=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsp-api-2.1.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_2_0.xsd
2015-02-21 19:01:18,718 DEBUG [main] org.mortbay.log: getResource(javax/servlet/jsp/resources/web-jsptaglibrary_1_1.dtd)=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsp-api-2.1.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_1_1.dtd
2015-02-21 19:01:18,718 DEBUG [main] org.mortbay.log: getResource(javax/servlet/jsp/resources/web-jsptaglibrary_1_2.dtd)=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsp-api-2.1.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_1_2.dtd
2015-02-21 19:01:18,718 DEBUG [main] org.mortbay.log: getResource(javax/servlet/jsp/resources/web-jsptaglibrary_2_0.xsd)=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsp-api-2.1.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_2_0.xsd
2015-02-21 19:01:18,718 DEBUG [main] org.mortbay.log: TLD=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jersey-server-1.9.jar!/META-INF/taglib.tld
2015-02-21 19:01:18,721 DEBUG [main] org.mortbay.log: resolveEntity(-//Sun Microsystems, Inc.//DTD JSP Tag Library 1.2//EN, http://java.sun.com/dtd/web-jsptaglibrary_1_2.dtd)
2015-02-21 19:01:18,721 DEBUG [main] org.mortbay.log: Can't exact match entity in redirect map, trying web-jsptaglibrary_1_2.dtd
2015-02-21 19:01:18,721 DEBUG [main] org.mortbay.log: Redirected entity http://java.sun.com/dtd/web-jsptaglibrary_1_2.dtd --> jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jsp-api-2.1.jar!/javax/servlet/jsp/resources/web-jsptaglibrary_1_2.dtd
2015-02-21 19:01:18,726 DEBUG [main] org.mortbay.log: Container Server@458ba94d + org.mortbay.jetty.servlet.HashSessionIdManager@1991a218 as sessionIdManager
2015-02-21 19:01:18,726 DEBUG [main] org.mortbay.log: Init SecureRandom.
2015-02-21 19:01:18,726 DEBUG [main] org.mortbay.log: started org.mortbay.jetty.servlet.HashSessionIdManager@1991a218
2015-02-21 19:01:18,727 DEBUG [main] org.mortbay.log: started org.mortbay.jetty.servlet.HashSessionManager@4befbfaf
2015-02-21 19:01:18,727 DEBUG [main] org.mortbay.log: filterNameMap={guice=guice, safety=safety, NoCacheFilter=NoCacheFilter, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 19:01:18,727 DEBUG [main] org.mortbay.log: pathFilters=[(F=NoCacheFilter,[/*],[],15), (F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[*.html, *.jsp],[],15), (F=AM_PROXY_FILTER,[/stacks],[],15), (F=AM_PROXY_FILTER,[/logLevel],[],15), (F=AM_PROXY_FILTER,[/metrics],[],15), (F=AM_PROXY_FILTER,[/jmx],[],15), (F=AM_PROXY_FILTER,[/conf],[],15), (F=AM_PROXY_FILTER,[/mapreduce/*],[],15), (F=AM_PROXY_FILTER,[/ws/*],[],15), (F=guice,[/*],[],15)]
2015-02-21 19:01:18,727 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 19:01:18,727 DEBUG [main] org.mortbay.log: servletPathMap={*.XSP=jsp, *.jsp=jsp, *.jspx=jsp, *.JSPF=jsp, /conf=conf, /=default, *.xsp=jsp, /stacks=stacks, /logLevel=logLevel, *.JSPX=jsp, *.jspf=jsp, /metrics=metrics, /jmx=jmx, *.JSP=jsp}
2015-02-21 19:01:18,727 DEBUG [main] org.mortbay.log: servletNameMap={jsp=jsp, default=default, jmx=jmx, metrics=metrics, logLevel=logLevel, conf=conf, stacks=stacks}
2015-02-21 19:01:18,728 DEBUG [main] org.mortbay.log: starting ServletHandler@23f3e3fd
2015-02-21 19:01:18,728 DEBUG [main] org.mortbay.log: started ServletHandler@23f3e3fd
2015-02-21 19:01:18,728 DEBUG [main] org.mortbay.log: starting SecurityHandler@60fd097b
2015-02-21 19:01:18,728 DEBUG [main] org.mortbay.log: started SecurityHandler@60fd097b
2015-02-21 19:01:18,728 DEBUG [main] org.mortbay.log: starting SessionHandler@4799bfc
2015-02-21 19:01:18,728 DEBUG [main] org.mortbay.log: started SessionHandler@4799bfc
2015-02-21 19:01:18,728 DEBUG [main] org.mortbay.log: starting org.mortbay.jetty.webapp.WebAppContext@ffaf13d{/,jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/mapreduce}
2015-02-21 19:01:18,728 DEBUG [main] org.mortbay.log: starting ErrorPageErrorHandler@6911a11b
2015-02-21 19:01:18,728 DEBUG [main] org.mortbay.log: started ErrorPageErrorHandler@6911a11b
2015-02-21 19:01:18,728 DEBUG [main] org.mortbay.log: loaded class org.apache.hadoop.http.NoCacheFilter from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:18,728 DEBUG [main] org.mortbay.log: Holding class org.apache.hadoop.http.NoCacheFilter
2015-02-21 19:01:18,729 DEBUG [main] org.mortbay.log: started NoCacheFilter
2015-02-21 19:01:18,729 DEBUG [main] org.mortbay.log: loaded class org.apache.hadoop.http.HttpServer2$QuotingInputFilter from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:18,729 DEBUG [main] org.mortbay.log: Holding class org.apache.hadoop.http.HttpServer2$QuotingInputFilter
2015-02-21 19:01:18,731 DEBUG [main] org.mortbay.log: started safety
2015-02-21 19:01:18,731 DEBUG [main] org.mortbay.log: loaded class org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:18,731 DEBUG [main] org.mortbay.log: Holding class org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
2015-02-21 19:01:18,733 DEBUG [main] org.mortbay.log: loaded class org.apache.commons.logging.impl.Log4JLogger
2015-02-21 19:01:18,733 DEBUG [main] org.mortbay.log: loaded class org.apache.commons.logging.impl.Log4JLogger from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:18,733 DEBUG [main] org.mortbay.log: started AM_PROXY_FILTER
2015-02-21 19:01:18,733 DEBUG [main] org.mortbay.log: loaded class com.google.inject.servlet.GuiceFilter from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:18,733 DEBUG [main] org.mortbay.log: Holding class com.google.inject.servlet.GuiceFilter
2015-02-21 19:01:18,735 DEBUG [main] org.mortbay.log: started guice
2015-02-21 19:01:18,736 DEBUG [main] org.mortbay.log: started conf
2015-02-21 19:01:18,736 DEBUG [main] org.mortbay.log: started stacks
2015-02-21 19:01:18,736 DEBUG [main] org.mortbay.log: started jmx
2015-02-21 19:01:18,736 DEBUG [main] org.mortbay.log: started logLevel
2015-02-21 19:01:18,736 DEBUG [main] org.mortbay.log: started metrics
2015-02-21 19:01:18,737 DEBUG [main] org.mortbay.log: loaded class org.apache.jasper.servlet.JspServlet from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:18,737 DEBUG [main] org.mortbay.log: Holding class org.apache.jasper.servlet.JspServlet
2015-02-21 19:01:18,767 DEBUG [main] org.apache.jasper.compiler.JspRuntimeContext: Parent class loader is: ContextLoader@mapreduce([]) / sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:18,768 DEBUG [main] org.apache.jasper.compiler.JspRuntimeContext: Compilation classpath initialized: /tmp/Jetty_0_0_0_0_51221_mapreduce____6xjp50/jsp:null
2015-02-21 19:01:18,769 DEBUG [main] org.apache.jasper.servlet.JspServlet: Scratch dir for the JSP engine is: /tmp/Jetty_0_0_0_0_51221_mapreduce____6xjp50/jsp
2015-02-21 19:01:18,769 DEBUG [main] org.apache.jasper.servlet.JspServlet: IMPORTANT: Do not modify the generated servlets
2015-02-21 19:01:18,769 DEBUG [main] org.mortbay.log: started jsp
2015-02-21 19:01:18,769 DEBUG [main] org.mortbay.log: loaded class org.mortbay.jetty.servlet.DefaultServlet
2015-02-21 19:01:18,769 DEBUG [main] org.mortbay.log: loaded class org.mortbay.jetty.servlet.DefaultServlet from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:18,769 DEBUG [main] org.mortbay.log: Holding class org.mortbay.jetty.servlet.DefaultServlet
2015-02-21 19:01:18,776 DEBUG [main] org.mortbay.log: started org.mortbay.jetty.servlet.DefaultServlet$NIOResourceCache@fcb1408
2015-02-21 19:01:18,776 DEBUG [main] org.mortbay.log: started org.mortbay.jetty.ResourceCache@5d902151
2015-02-21 19:01:18,776 DEBUG [main] org.mortbay.log: resource base = file:/tmp/Jetty_0_0_0_0_51221_mapreduce____6xjp50/webapp/
2015-02-21 19:01:18,776 DEBUG [main] org.mortbay.log: started default
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: started org.mortbay.jetty.webapp.WebAppContext@ffaf13d{/,jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/mapreduce}
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: Container org.mortbay.jetty.servlet.Context@4682981{/static,jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/static} + ErrorHandler@217b7cd4 as errorHandler
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: filterNameMap={safety=safety, AM_PROXY_FILTER=AM_PROXY_FILTER}
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: pathFilters=[(F=safety,[/*],[],15), (F=AM_PROXY_FILTER,[/*],[],15)]
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: servletFilterMap=null
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: servletPathMap={/*=org.mortbay.jetty.servlet.DefaultServlet-1020722559}
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: servletNameMap={org.mortbay.jetty.servlet.DefaultServlet-1020722559=org.mortbay.jetty.servlet.DefaultServlet-1020722559}
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: starting ServletHandler@527cd669
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: started ServletHandler@527cd669
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: starting org.mortbay.jetty.servlet.Context@4682981{/static,jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/static}
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: starting ErrorHandler@217b7cd4
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: started ErrorHandler@217b7cd4
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: Holding class org.apache.hadoop.http.HttpServer2$QuotingInputFilter
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: started safety
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: Holding class org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: started AM_PROXY_FILTER
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: Holding class org.mortbay.jetty.servlet.DefaultServlet
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: started org.mortbay.jetty.servlet.DefaultServlet-1020722559
2015-02-21 19:01:18,777 DEBUG [main] org.mortbay.log: started org.mortbay.jetty.servlet.Context@4682981{/static,jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/hadoop-yarn-common-2.5.0-cdh5.3.0.jar!/webapps/static}
2015-02-21 19:01:18,778 DEBUG [main] org.mortbay.log: starting ContextHandlerCollection@3d85a0b9
2015-02-21 19:01:18,778 DEBUG [main] org.mortbay.log: started ContextHandlerCollection@3d85a0b9
2015-02-21 19:01:18,778 DEBUG [main] org.mortbay.log: starting Server@458ba94d
2015-02-21 19:01:18,782 DEBUG [main] org.mortbay.log: started org.mortbay.jetty.nio.SelectChannelConnector$1@fdbf8f6
2015-02-21 19:01:18,785 INFO [main] org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:51221
2015-02-21 19:01:18,785 DEBUG [main] org.mortbay.log: started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:51221
2015-02-21 19:01:18,785 DEBUG [main] org.mortbay.log: started Server@458ba94d
2015-02-21 19:01:18,785 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app /mapreduce started at 51221
2015-02-21 19:01:18,923 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /([])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#index
2015-02-21 19:01:18,926 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 19:01:18,934 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 19:01:18,935 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /app([])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#index
2015-02-21 19:01:18,935 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 19:01:18,935 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 19:01:18,935 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /job([:job.id])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#job
2015-02-21 19:01:18,936 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 19:01:18,936 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 19:01:18,936 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /conf([:job.id])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#conf
2015-02-21 19:01:18,936 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 19:01:18,936 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 19:01:18,936 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /jobcounters([:job.id])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#jobCounters
2015-02-21 19:01:18,936 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 19:01:18,936 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 19:01:18,936 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /singlejobcounter([:job.id, :counter.group, :counter.name])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#singleJobCounter
2015-02-21 19:01:18,937 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 19:01:18,937 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 19:01:18,937 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /tasks([:job.id, :task.type, :task.state])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#tasks
2015-02-21 19:01:18,937 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 19:01:18,937 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 19:01:18,938 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /attempts([:job.id, :task.type, :attempt.state])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#attempts
2015-02-21 19:01:18,938 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 19:01:18,938 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: found org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 19:01:18,938 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: adding /task([:task.id])->class org.apache.hadoop.mapreduce.v2.app.webapp.AppController#task
2015-02-21 19:01:18,938 DEBUG [main] org.apache.hadoop.yarn.webapp.Router: trying: org.apache.hadoop.mapreduce.v2.app.webapp.AppView
2015-02-21 19:01:18,938 DEBUG [main] = org.apache.hadoop.yarn.webapp.Router: found = org.apache.hadoop.mapreduce.v2.app.webapp.AppView=0A= 2015-02-21 19:01:18,938 DEBUG [main] = org.apache.hadoop.yarn.webapp.Router: adding = /taskcounters([:task.id])->class = org.apache.hadoop.mapreduce.v2.app.webapp.AppController#taskCounters=0A= 2015-02-21 19:01:18,938 DEBUG [main] = org.apache.hadoop.yarn.webapp.Router: trying: = org.apache.hadoop.mapreduce.v2.app.webapp.AppView=0A= 2015-02-21 19:01:18,938 DEBUG [main] = org.apache.hadoop.yarn.webapp.Router: found = org.apache.hadoop.mapreduce.v2.app.webapp.AppView=0A= 2015-02-21 19:01:18,938 DEBUG [main] = org.apache.hadoop.yarn.webapp.Router: adding = /singletaskcounter([:task.id, :counter.group, :counter.name])->class = org.apache.hadoop.mapreduce.v2.app.webapp.AppController#singleTaskCounter=0A= 2015-02-21 19:01:18,938 DEBUG [main] = org.apache.hadoop.yarn.webapp.Router: trying: = org.apache.hadoop.mapreduce.v2.app.webapp.AppView=0A= 2015-02-21 19:01:18,938 DEBUG [main] = org.apache.hadoop.yarn.webapp.Router: found = org.apache.hadoop.mapreduce.v2.app.webapp.AppView=0A= 2015-02-21 19:01:19,160 INFO [main] = org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules=0A= 2015-02-21 19:01:19,162 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service = org.apache.hadoop.mapreduce.v2.app.client.MRClientService is started=0A= 2015-02-21 19:01:19,162 DEBUG [main] = org.apache.hadoop.service.CompositeService: = org.apache.hadoop.mapreduce.v2.app.MRAppMaster: starting services, = size=3D7=0A= 2015-02-21 19:01:19,163 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service Dispatcher is started=0A= 2015-02-21 19:01:19,164 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: = AM_STARTED=0A= 2015-02-21 19:01:19,164 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: = JOB_SUBMITTED=0A= 2015-02-21 19:01:19,164 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: = JOB_CREATE=0A= 2015-02-21 19:01:19,164 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service CommitterEventHandler = is started=0A= 2015-02-21 19:01:19,166 DEBUG [main] org.apache.hadoop.ipc.Server: = rpcKind=3DRPC_WRITABLE, rpcRequestWrapperClass=3Dclass = org.apache.hadoop.ipc.WritableRpcEngine$Invocation, = rpcInvoker=3Dorg.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcIn= voker@6b46e91a=0A= 2015-02-21 19:01:19,167 INFO [main] = org.apache.hadoop.ipc.CallQueueManager: Using callQueue class = java.util.concurrent.LinkedBlockingQueue=0A= 2015-02-21 19:01:19,167 DEBUG [main] org.apache.hadoop.ipc.Server: TOKEN = authentication enabled for secret manager=0A= 2015-02-21 19:01:19,167 DEBUG [main] org.apache.hadoop.ipc.Server: = Server accepts auth methods:[TOKEN, SIMPLE]=0A= 2015-02-21 19:01:19,168 INFO [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 35954=0A= 2015-02-21 19:01:19,168 DEBUG [main] = org.apache.hadoop.ipc.metrics.RpcMetrics: Initialized = MetricsRegistry{info=3DMetricsInfoImpl{name=3Drpc, description=3Drpc}, = tags=3D[MetricsTag{info=3DMetricsInfoImpl{name=3Dport, description=3DRPC = port}, value=3D35954}], 
metrics=3D[]}=0A= 2015-02-21 19:01:19,168 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterLong = org.apache.hadoop.ipc.metrics.RpcMetrics.receivedBytes with annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Number of received bytes], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:19,169 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterLong = org.apache.hadoop.ipc.metrics.RpcMetrics.sentBytes with annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Number of sent bytes], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:19,169 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableRate = org.apache.hadoop.ipc.metrics.RpcMetrics.rpcQueueTime with annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Queue time], about=3D, type=3DDEFAULT, always=3Dfalse, = sampleName=3DOps)=0A= 2015-02-21 19:01:19,169 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableRate = org.apache.hadoop.ipc.metrics.RpcMetrics.rpcProcessingTime with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Processsing time], about=3D, type=3DDEFAULT, always=3Dfalse, = sampleName=3DOps)=0A= 2015-02-21 19:01:19,169 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterInt = org.apache.hadoop.ipc.metrics.RpcMetrics.rpcAuthenticationFailures with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Number of authentication failures], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:19,169 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterInt = org.apache.hadoop.ipc.metrics.RpcMetrics.rpcAuthenticationSuccesses with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Number of authentication successes], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:19,169 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterInt = org.apache.hadoop.ipc.metrics.RpcMetrics.rpcAuthorizationFailures with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Number of authorization failures], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:19,169 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableCounterInt = org.apache.hadoop.ipc.metrics.RpcMetrics.rpcAuthorizationSuccesses with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Number of authorization sucesses], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:19,170 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: method public int = org.apache.hadoop.ipc.metrics.RpcMetrics.numOpenConnections() with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Number of open connections], about=3D, type=3DDEFAULT, = 
always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:19,170 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: method public int = org.apache.hadoop.ipc.metrics.RpcMetrics.callQueueLength() with = annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[Length of the call queue], about=3D, type=3DDEFAULT, = always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:19,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: = RpcActivityForPort35954, Aggregate RPC metrics=0A= 2015-02-21 19:01:19,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'PropertiesConfiguration' for key: source.source.start_mbeans=0A= 2015-02-21 19:01:19,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'MetricsConfig' for key: source.start_mbeans=0A= 2015-02-21 19:01:19,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'PropertiesConfiguration' for key: *.source.start_mbeans=0A= 2015-02-21 19:01:19,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating attr = cache...=0A= 2015-02-21 19:01:19,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done. # tags & = metrics=3D15=0A= 2015-02-21 19:01:19,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating info = cache...=0A= 2015-02-21 19:01:19,170 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: = [javax.management.MBeanAttributeInfo[description=3DRPC port, = name=3Dtag.port, type=3Djava.lang.String, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DMetrics context, = name=3Dtag.Context, type=3Djava.lang.String, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DLocal hostname, = name=3Dtag.Hostname, type=3Djava.lang.String, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of received = bytes, name=3DReceivedBytes, type=3Djava.lang.Long, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of sent bytes, = name=3DSentBytes, type=3Djava.lang.Long, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of ops for = queue time, name=3DRpcQueueTimeNumOps, type=3Djava.lang.Long, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DAverage time for queue = time, name=3DRpcQueueTimeAvgTime, type=3Djava.lang.Double, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of ops for = processsing time, name=3DRpcProcessingTimeNumOps, type=3Djava.lang.Long, = read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DAverage time for = processsing time, name=3DRpcProcessingTimeAvgTime, = type=3Djava.lang.Double, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of = authentication failures, name=3DRpcAuthenticationFailures, = type=3Djava.lang.Integer, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of = authentication successes, name=3DRpcAuthenticationSuccesses, = type=3Djava.lang.Integer, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of = authorization failures, name=3DRpcAuthorizationFailures, = type=3Djava.lang.Integer, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of = authorization sucesses, 
name=3DRpcAuthorizationSuccesses, = type=3Djava.lang.Integer, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DNumber of open = connections, name=3DNumOpenConnections, type=3Djava.lang.Integer, = read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DLength of the call = queue, name=3DCallQueueLength, type=3Djava.lang.Integer, read-only, = descriptor=3D{}]]=0A= 2015-02-21 19:01:19,171 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done=0A= 2015-02-21 19:01:19,171 DEBUG [main] = org.apache.hadoop.metrics2.util.MBeans: Registered = Hadoop:service=3DMRAppMaster,name=3DRpcActivityForPort35954=0A= 2015-02-21 19:01:19,171 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source = RpcActivityForPort35954 registered.=0A= 2015-02-21 19:01:19,171 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Registered source = RpcActivityForPort35954=0A= 2015-02-21 19:01:19,171 DEBUG [main] = org.apache.hadoop.ipc.metrics.RpcDetailedMetrics: = MetricsInfoImpl{name=3Drpcdetailed, description=3Drpcdetailed}=0A= 2015-02-21 19:01:19,171 DEBUG [main] = org.apache.hadoop.metrics2.lib.MutableMetricsFactory: field = org.apache.hadoop.metrics2.lib.MutableRates = org.apache.hadoop.ipc.metrics.RpcDetailedMetrics.rates with annotation = @org.apache.hadoop.metrics2.annotation.Metric(valueName=3DTime, = value=3D[], about=3D, type=3DDEFAULT, always=3Dfalse, sampleName=3DOps)=0A= 2015-02-21 19:01:19,171 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: = RpcDetailedActivityForPort35954, Per method RPC metrics=0A= 2015-02-21 19:01:19,171 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'PropertiesConfiguration' for key: source.source.start_mbeans=0A= 2015-02-21 19:01:19,171 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'MetricsConfig' for key: source.start_mbeans=0A= 2015-02-21 19:01:19,171 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsConfig: poking parent = 'PropertiesConfiguration' for key: *.source.start_mbeans=0A= 2015-02-21 19:01:19,171 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating attr = cache...=0A= 2015-02-21 19:01:19,171 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done. 
# tags & = metrics=3D3=0A= 2015-02-21 19:01:19,171 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Updating info = cache...=0A= 2015-02-21 19:01:19,174 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: = [javax.management.MBeanAttributeInfo[description=3DRPC port, = name=3Dtag.port, type=3Djava.lang.String, read-only, descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DMetrics context, = name=3Dtag.Context, type=3Djava.lang.String, read-only, = descriptor=3D{}], = javax.management.MBeanAttributeInfo[description=3DLocal hostname, = name=3Dtag.Hostname, type=3Djava.lang.String, read-only, = descriptor=3D{}]]=0A= 2015-02-21 19:01:19,174 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: Done=0A= 2015-02-21 19:01:19,175 DEBUG [main] = org.apache.hadoop.metrics2.util.MBeans: Registered = Hadoop:service=3DMRAppMaster,name=3DRpcDetailedActivityForPort35954=0A= 2015-02-21 19:01:19,175 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source = RpcDetailedActivityForPort35954 registered.=0A= 2015-02-21 19:01:19,175 DEBUG [main] = org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Registered source = RpcDetailedActivityForPort35954=0A= 2015-02-21 19:01:19,175 DEBUG [main] org.apache.hadoop.ipc.Server: = RpcKind =3D RPC_PROTOCOL_BUFFER Protocol Name =3D = org.apache.hadoop.ipc.ProtocolMetaInfoPB version=3D1 = ProtocolImpl=3Dorg.apache.hadoop.ipc.protobuf.ProtocolInfoProtos$Protocol= InfoService$2 protocolClass=3Dorg.apache.hadoop.ipc.ProtocolMetaInfoPB=0A= 2015-02-21 19:01:19,177 DEBUG [main] org.apache.hadoop.ipc.Server: = RpcKind =3D RPC_WRITABLE Protocol Name =3D = org.apache.hadoop.mapred.TaskUmbilicalProtocol version=3D19 = ProtocolImpl=3Dorg.apache.hadoop.mapred.TaskAttemptListenerImpl = protocolClass=3Dorg.apache.hadoop.mapred.TaskUmbilicalProtocol=0A= 2015-02-21 19:01:19,177 INFO [IPC Server Responder] = org.apache.hadoop.ipc.Server: IPC Server Responder: starting=0A= 2015-02-21 19:01:19,177 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 35954: starting=0A= 2015-02-21 19:01:19,177 INFO [IPC Server listener on 35954] = org.apache.hadoop.ipc.Server: IPC Server listener on 35954: starting=0A= 2015-02-21 19:01:19,177 DEBUG [IPC Server handler 1 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 1 on 35954: starting=0A= 2015-02-21 19:01:19,181 DEBUG [IPC Server handler 2 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 2 on 35954: starting=0A= 2015-02-21 19:01:19,181 DEBUG [IPC Server handler 3 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 3 on 35954: starting=0A= 2015-02-21 19:01:19,181 DEBUG [IPC Server handler 5 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 5 on 35954: starting=0A= 2015-02-21 19:01:19,181 DEBUG [IPC Server handler 6 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: starting=0A= 2015-02-21 19:01:19,181 DEBUG [IPC Server handler 4 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: starting=0A= 2015-02-21 19:01:19,181 DEBUG [IPC Server handler 7 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 7 on 35954: starting=0A= 2015-02-21 19:01:19,182 DEBUG [IPC Server handler 8 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: starting=0A= 2015-02-21 19:01:19,182 DEBUG [IPC Server handler 10 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 10 on 35954: starting=0A= 2015-02-21 19:01:19,182 DEBUG [IPC 
Server handler 9 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 9 on 35954: starting=0A= 2015-02-21 19:01:19,182 DEBUG [IPC Server handler 12 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 12 on 35954: starting=0A= 2015-02-21 19:01:19,182 DEBUG [IPC Server handler 11 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 11 on 35954: starting=0A= 2015-02-21 19:01:19,182 DEBUG [IPC Server handler 13 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 13 on 35954: starting=0A= 2015-02-21 19:01:19,182 DEBUG [IPC Server handler 15 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 15 on 35954: starting=0A= 2015-02-21 19:01:19,183 DEBUG [IPC Server handler 16 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 16 on 35954: starting=0A= 2015-02-21 19:01:19,182 DEBUG [IPC Server handler 14 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 14 on 35954: starting=0A= 2015-02-21 19:01:19,183 DEBUG [IPC Server handler 18 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 18 on 35954: starting=0A= 2015-02-21 19:01:19,183 DEBUG [IPC Server handler 17 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 17 on 35954: starting=0A= 2015-02-21 19:01:19,183 DEBUG [IPC Server handler 19 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 19 on 35954: starting=0A= 2015-02-21 19:01:19,183 DEBUG [IPC Server handler 20 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 20 on 35954: starting=0A= 2015-02-21 19:01:19,183 DEBUG [IPC Server handler 22 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 22 on 35954: starting=0A= 2015-02-21 19:01:19,183 DEBUG [IPC Server handler 21 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 21 on 35954: starting=0A= 2015-02-21 19:01:19,183 DEBUG [IPC Server handler 23 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 23 on 35954: starting=0A= 2015-02-21 19:01:19,184 DEBUG [IPC Server handler 25 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 25 on 35954: starting=0A= 2015-02-21 19:01:19,184 DEBUG [IPC Server handler 24 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 24 on 35954: starting=0A= 2015-02-21 19:01:19,184 DEBUG [IPC Server handler 27 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 27 on 35954: starting=0A= 2015-02-21 19:01:19,184 DEBUG [IPC Server handler 26 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 26 on 35954: starting=0A= 2015-02-21 19:01:19,184 DEBUG [IPC Server handler 28 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 28 on 35954: starting=0A= 2015-02-21 19:01:19,184 DEBUG [main] = org.apache.hadoop.service.CompositeService: = org.apache.hadoop.mapred.TaskAttemptListenerImpl: starting services, = size=3D1=0A= 2015-02-21 19:01:19,185 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service TaskHeartbeatHandler = is started=0A= 2015-02-21 19:01:19,185 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service = org.apache.hadoop.mapred.TaskAttemptListenerImpl is started=0A= 2015-02-21 19:01:19,185 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service = org.apache.hadoop.mapreduce.v2.app.MRAppMaster$StagingDirCleaningService = is started=0A= 2015-02-21 19:01:19,191 DEBUG [IPC Server handler 29 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 29 on 35954: starting=0A= 2015-02-21 19:01:19,201 DEBUG [main] = org.apache.hadoop.service.AbstractService: Service: RMCommunicator = entered state INITED=0A= 2015-02-21 19:01:19,202 
2015-02-21 19:01:19,202 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
2015-02-21 19:01:19,202 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2015-02-21 19:01:19,202 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2015-02-21 19:01:19,271 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; Ignoring.
2015-02-21 19:01:19,272 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2015-02-21 19:01:19,272 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.client.conf; Ignoring.
2015-02-21 19:01:19,273 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class; Ignoring.
2015-02-21 19:01:19,274 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: hadoop.ssl.server.conf; Ignoring.
2015-02-21 19:01:19,278 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2015-02-21 19:01:19,281 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at hadoop0.rdpratti.com/192.168.2.253:8030
2015-02-21 19:01:19,282 DEBUG [main] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.yarn.client.RMProxy.getProxy(RMProxy.java:136)
2015-02-21 19:01:19,282 DEBUG [main] org.apache.hadoop.yarn.ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
2015-02-21 19:01:19,282 DEBUG [main] org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ApplicationMasterProtocol
2015-02-21 19:01:19,290 DEBUG [main] org.apache.hadoop.ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@27c8bfa4
2015-02-21 19:01:19,321 DEBUG [main] org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.
2015-02-21 19:01:19,321 DEBUG [main] org.apache.hadoop.ipc.Client: Connecting to hadoop0.rdpratti.com/192.168.2.253:8030
2015-02-21 19:01:19,322 DEBUG [main] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
2015-02-21 19:01:19,322 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 19:01:19,325 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
method: "TOKEN"
mechanism: "DIGEST-MD5"
protocol: ""
serverId: "default"
challenge: "realm=\"default\",nonce=\"rHx6NeRp8bF44sZBE9l68un0MwlvlLm+DrNy8TGv\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}

2015-02-21 19:01:19,331 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB info:org.apache.hadoop.yarn.security.SchedulerSecurityInfo$1@ff80080
2015-02-21 19:01:19,332 DEBUG [main] org.apache.hadoop.yarn.security.AMRMTokenSelector: Looking for a token with service 192.168.2.253:8030
2015-02-21 19:01:19,332 DEBUG [main] org.apache.hadoop.yarn.security.AMRMTokenSelector: Token kind is YARN_AM_RM_TOKEN and the token's service name is 192.168.2.253:8030
2015-02-21 19:01:19,337 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2015-02-21 19:01:19,340 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for protocol ApplicationMasterProtocolPB
2015-02-21 19:01:19,340 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting username: AAABS63OA3sAAAACAAAAAV8/5o8=
2015-02-21 19:01:19,341 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting userPassword
2015-02-21 19:01:19,341 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting realm: default
2015-02-21 19:01:19,342 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
token: "charset=utf-8,username=\"AAABS63OA3sAAAACAAAAAV8/5o8=\",realm=\"default\",nonce=\"rHx6NeRp8bF44sZBE9l68un0MwlvlLm+DrNy8TGv\",nc=00000001,cnonce=\"Pruo19ZHc6ZQtgIgGgkBicCdAwutn0jQFLMVDgw+\",digest-uri=\"/default\",maxbuf=65536,response=0cb718c9a13562bfbe1803b211ace4c4,qop=auth"
auths {
method: "TOKEN"
mechanism: "DIGEST-MD5"
protocol: ""
serverId: "default"
}

2015-02-21 19:01:19,345 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: SUCCESS
token: "rspauth=2a49fd8d671d2d78306188fe2b8ba06a"

2015-02-21 19:01:19,346 DEBUG [main] org.apache.hadoop.ipc.Client: Negotiated QOP is :auth
2015-02-21 19:01:19,349 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #10
2015-02-21 19:01:19,356 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera: starting, having connections 2
2015-02-21 19:01:19,371 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #10
2015-02-21 19:01:19,371 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: registerApplicationMaster took 52ms
2015-02-21 19:01:19,431 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: maxContainerCapability: 
2015-02-21 19:01:19,431 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: root.cloudera
2015-02-21 19:01:19,433 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: JOB_QUEUE_CHANGED
2015-02-21 19:01:19,433 DEBUG [IPC Server listener on 59910] org.apache.hadoop.ipc.Server: Server connection from 192.168.2.253:57473; # active connections: 1; # queued calls: 0
2015-02-21 19:01:19,434 DEBUG [main] org.apache.hadoop.service.AbstractService: Service RMCommunicator is started
2015-02-21 19:01:19,434 DEBUG [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter is started
2015-02-21 19:01:19,436 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl entered state INITED
2015-02-21 19:01:19,439 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #-3
2015-02-21 19:01:19,443 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500
2015-02-21 19:01:19,444 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: Successfully authorized userInfo {
effectiveUser: "cloudera"
}
protocol: "org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB"

2015-02-21 19:01:19,444 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #66
2015-02-21 19:01:19,444 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#66 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:19,446 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:19,460 INFO [main] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
2015-02-21 19:01:19,460 DEBUG [main] org.apache.hadoop.yarn.ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
2015-02-21 19:01:19,468 DEBUG [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl is started
2015-02-21 19:01:19,468 DEBUG [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter is started
2015-02-21 19:01:19,477 DEBUG [main] org.apache.hadoop.service.AbstractService: Service JobHistoryEventHandler is started
2015-02-21 19:01:19,478 DEBUG [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster is started
2015-02-21 19:01:19,478 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobStartEvent.EventType: JOB_START
2015-02-21 19:01:19,478 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_START
2015-02-21 19:01:19,483 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist: masked=rw-r--r--
2015-02-21 19:01:19,485 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1424550134651_0002Job Transitioned from INITED to SETUP
2015-02-21 19:01:19,485 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: JOB_INITED
2015-02-21 19:01:19,485 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: JOB_INFO_CHANGED
2015-02-21 19:01:19,485 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.commit.CommitterJobSetupEvent.EventType: JOB_SETUP
2015-02-21 19:01:19,495 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP
2015-02-21 19:01:19,497 DEBUG [CommitterEvent Processor #0] org.apache.hadoop.hdfs.DFSClient: /user/cloudera/wordlengths4/_temporary/1: masked=rwxr-xr-x
2015-02-21 19:01:19,499 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #12
2015-02-21 19:01:19,501 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #11
2015-02-21 19:01:19,542 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #12
2015-02-21 19:01:19,542 DEBUG [CommitterEvent Processor #0] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: mkdirs took 44ms
2015-02-21 19:01:19,545 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobSetupCompletedEvent.EventType: JOB_SETUP_COMPLETED
2015-02-21 19:01:19,545 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_SETUP_COMPLETED
2015-02-21 19:01:19,546 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1424550134651_0002Job Transitioned from SETUP to RUNNING
2015-02-21 19:01:19,546 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskEvent.EventType: T_SCHEDULE
2015-02-21 19:01:19,546 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_m_000000 of type T_SCHEDULE
2015-02-21 19:01:19,571 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #11
2015-02-21 19:01:19,571 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: create took 74ms
2015-02-21 19:01:19,575 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: computePacketChunkSize: src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, chunkSize=516, chunksPerPacket=127, packetSize=65532
2015-02-21 19:01:19,621 DEBUG [LeaseRenewer:cloudera@hadoop0.rdpratti.com:8020] org.apache.hadoop.hdfs.LeaseRenewer: Lease renewer daemon for [DFSClient_NONMAPREDUCE_-907115631_1] with renew id 1 started
2015-02-21 19:01:19,651 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.util.RackResolver: Resolved hadoop0.rdpratti.com to /default
2015-02-21 19:01:19,652 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Created attempt attempt_1424550134651_0002_m_000000_0
2015-02-21 19:01:19,655 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1424550134651_0002_m_000000 Task Transitioned from NEW to SCHEDULED
[... the same T_SCHEDULE dispatch / "Created attempt" / "Task Transitioned from NEW to SCHEDULED" cycle repeats for map tasks m_000001-m_000004 (RackResolver resolved hadoop0, hadoop3, and hadoop1 to /default) and reduce tasks r_000000-r_000003 between 19:01:19,655 and 19:01:19,821, interleaved with the entries kept below; minor IPC sending/receiving and PrivilegedAction DEBUG lines omitted as well ...]
2015-02-21 19:01:19,667 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 115 procesingTime= 108
2015-02-21 19:01:19,675 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#66 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:19,675 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #67
2015-02-21 19:01:19,676 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#67 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:19,748 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
2015-02-21 19:01:19,790 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1424550134651_0002, File: hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:19,791 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1_conf.xml: masked=rw-r--r--
2015-02-21 19:01:19,819 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 2 procesingTime= 142
2015-02-21 19:01:19,820 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#67 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:19,820 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: create took 29ms
2015-02-21 19:01:19,820 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: computePacketChunkSize: src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1_conf.xml, chunkSize=516, chunksPerPacket=127, packetSize=65532
2015-02-21 19:01:19,822 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000000_0 of type TA_SCHEDULE
2015-02-21 19:01:19,826 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED
[... the same TA_SCHEDULE dispatch / "TaskAttempt Transitioned from NEW to UNASSIGNED" / TASK_STARTED dispatch cycle repeats for attempts m_000001_0 through r_000002_0 (19:01:19,826-19:01:19,829) ...]
2015-02-21 19:01:19,829 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: TASK_STARTED
2015-02-21 19:01:19,829 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_SCHEDULE
2015-02-21 19:01:19,829 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000003_0 of type TA_SCHEDULE
2015-02-21 19:01:19,829 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_r_000003_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2015-02-21 19:01:19,829 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: TASK_STARTED
2015-02-21 19:01:19,829 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: TASK_CONTAINER_NEED_UPDATE
2015-02-21 19:01:19,830 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for all properties in config...
2015-02-21 19:01:19,831 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.datanode.data.dir
[... similar "Handling deprecation for <property>" DEBUG lines for roughly two hundred more configuration properties omitted (19:01:19,831 through 19:01:19,845) ...]
2015-02-21 19:01:19,845 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for ipc.server.tcpnodelay
2015-02-21 19:01:19,845 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.cleaner.enable=0A= 2015-02-21 19:01:19,845 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.du.interval=0A= 2015-02-21 19:01:19,845 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.shuffle.retry-delay.max.ms=0A= 2015-02-21 19:01:19,845 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.task.profile.reduces=0A= 2015-02-21 19:01:19,845 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ha.health-monitor.connect-retry-interval.ms=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.fuse.connection.timeout=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.permissions.superusergroup=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.jobhistory.task.numberprogresssplits=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.ftp.host.port=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.map.speculative=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.data.dir.perm=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.client.submit.file.replication=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.startup.delay.block.deletion.sec=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = s3native.blocksize=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.ubertask.maxmaps=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.replication.min=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.cluster.acls.enabled=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.uid.cache.secs=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.webapp.https.address=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = nfs.allow.insecure.ports=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.localizer.fetch.thread-count=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = map.sort.class=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = 
hadoop.proxyuser.hue.groups=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.trash.checkpoint.interval=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapred.queue.default.acl-administer-jobs=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.image.transfer.timeout=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.name.dir=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ipc.client.connect.timeout=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.app.mapreduce.am.staging-dir=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.AbstractFileSystem.file.impl=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.env-whitelist=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.timeline-service.keytab=0A= 2015-02-21 19:01:19,846 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.image.compression.codec=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.reduces=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.complete.cancel.delegation.tokens=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.recovery.store.class=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.group.mapping.ldap.search.filter.user=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.enable.retrycache=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.sleep-delay-before-sigkill.ms=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.joblist.cache.size=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.tasktracker.healthchecker.interval=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.heartbeats.in.second=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.auth_to_local=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.persist.jobstatus.dir=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.backup.http-address=0A= 2015-02-21 19:01:19,847 DEBUG 
[eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.rpc.protection=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.mmap.enabled=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.app.mapreduce.am.container.log.backups=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ftp.stream-buffer-size=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.https-address=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.timeline-service.address=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.ha.log-roll.period=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.recovery.enabled=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.input.fileinputformat.numinputfiles=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.groups.negative-cache.secs=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.admin.client.thread-count=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.fsdatasetcache.max.threads.per.volume=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = file.client-write-packet-size=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.http.authentication.simple.anonymous.allowed=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.timeline-service.leveldb-timeline-store.path=0A= 2015-02-21 19:01:19,847 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.proxy-user-privileges.enabled=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.drop.cache.behind.reads=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.log.retain-seconds=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.image.transfer.bandwidthPerSec=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.slow.io.warning.threshold.ms=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.tasktracker.instrumentation=0A= 2015-02-21 19:01:19,848 
DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ha.failover-controller.cli-check.rpc-timeout.ms=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.linux-container-executor.cgroups.hierarchy=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.write.stale.datanode.ratio=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.groups.cache.warn.after.ms=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.shuffle.fetch.retry.timeout-ms=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.client.thread-count=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = io.mapfile.bloom.size=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.work-preserving-recovery.enabled=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.ha.fencing.ssh.connect-timeout=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.zk-num-retries=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = s3.bytes-per-checksum=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.app.mapreduce.am.container.log.limit.kb=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.edit.log.autoroll.check.interval.ms=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.automatic.close=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.trash.interval=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.journalnode.https-address=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.timeline-service.ttl-ms=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.authentication=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.defaultFS=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.timeline-service.generic-application-history.enabled=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for nfs.rtmax=0A= 2015-02-21 19:01:19,848 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.ssl.server.conf=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling 
deprecation for = ipc.client.connect.max.retries=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = "hadoop.security.kms.client.encrypted.key.cache.expiry=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.delayed.delegation-token.removal-interval-ms=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.journalnode.http-address=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.xattrs.enabled=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.proxyuser.httpfs.hosts=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.shared.file.descriptor.paths=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.taskscheduler=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.speculative.speculativecap=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.timeline-service.store-class=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.am.liveness-monitor.expiry-interval-ms=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.output.fileoutputformat.compress=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.user.home.dir.prefix=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.app.mapreduce.am.log.level=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = net.topology.node.switch.mapping.impl=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.replication.considerLoad=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.fs-limits.min-block-size=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.swift.impl=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.audit.loggers=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.max.split.locations=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.address=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.counters.max=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.shuffle.fetch.retry.enabled=0A= 
2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.block.write.retries=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.nm.liveness-monitor.interval-ms=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.short.circuit.shared.memory.watcher.interrupt.check.ms=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = io.map.index.interval=0A= 2015-02-21 19:01:19,849 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapred.child.java.opts=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.tasktracker.local.dir.minspacestart=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.client.progressmonitor.pollinterval=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.https.keystore.resource=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.task.profile.map.params=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = rpc.engine.org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.tasktracker.maxblacklists=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.queuename=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.localizer.address=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = io.mapfile.bloom.error.rate=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.delete.thread-count=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.split.metainfo.maxsize=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.scheduler.maximum-allocation-vcores=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapred.mapper.new-api=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ipc.client.tcpnodelay=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.dir=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.https.port=0A= 2015-02-21 19:01:19,850 DEBUG 
[eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.app.mapreduce.am.resource.mb=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.dns.nameserver=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.slow.io.warning.threshold.ms=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.reducer.preempt.delay.sec=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.map.output.compress.codec=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.accesstime.precision=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.map.log.level=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.s3a.connection.maximum=0A= 2015-02-21 19:01:19,850 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = io.seqfile.compress.blocksize=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.tasktracker.taskcontroller=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.groups.cache.secs=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.cache.revocation.timeout.ms=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.context=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.proxyuser.hive.groups=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.input.lineinputformat.linespermap=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.end-notification.max.attempts=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.webapp.address=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.submithostname=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.recovery.enable=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.expire.trackers.interval=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = 
org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.proxyuser.flume.hosts=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.proxyuser.hdfs.hosts=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.webapp.address=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.kms.client.encrypted.key.cache.num.refill.threads=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.health-checker.interval-ms=0A= 2015-02-21 19:01:19,855 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.loadedjobs.cache.size=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.history-writer.multi-threaded-dispatcher.pool-size=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.authorization=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.proxyuser.mapred.groups=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.map.output.collector.class=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.am.max-attempts=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.ftp.host=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.s3a.attempts.maximum=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.servicerpc-address=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.ifile.readahead=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.scheduler.monitor.enable=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.zk-retry-interval-ms=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = rpc.engine.org.apache.hadoop.ipc.ProtocolMetaInfoPB=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ha.zookeeper.session-timeout.ms=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.tasktracker.taskmemorymanager.monitoringinterval=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.shuffle.parallelcopies=0A= 2015-02-21 19:01:19,856 DEBUG 
[eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.mmap.retry.timeout.ms=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.map.skip.maxrecords=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.output.value.class=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.classloader.system.classes=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.avoid.read.stale.datanode=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.https.enable=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.timeline-service.webapp.https.address=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.shuffle.read.timeout=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.list.encryption.zones.num.responses=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.remote-app-log-dir-suffix=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.output.fileoutputformat.compress.codec=0A= 2015-02-21 19:01:19,856 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.instrumentation=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.blockreport.intervalMsec=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ipc.client.connect.retry.interval=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.speculative=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.keytab=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.datestring.cache.size=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.balance.bandwidthPerSec=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = file.blocksize=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.admin.address=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.map.cpu.vcores=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.container-monitor.procfs-tree.smaps-based-rss.enabled=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = 
org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.configuration.provider-class=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.resource-tracker.address=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.tasktracker.local.dir.minspacekill=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.staging.root.dir=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.retiredjobs.cache.size=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ipc.client.connect.max.retries.on.timeouts=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ha.zookeeper.acl=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.crypto.codec.classes.aes.ctr.nopadding=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.local-dirs=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.app-submission.cross-platform=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.shuffle.connect.timeout=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.block.access.key.update.interval=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = rpc.metrics.quantile.enable=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.block.access.token.lifetime=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.end-notification.retry.attempts=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.fs-limits.max-xattrs-per-inode=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.system.dir=0A= 2015-02-21 19:01:19,857 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.file-block-storage-locations.timeout.millis=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.admin-env=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.jobhistory.block.size=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.log-aggregation.retain-seconds=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.tasktracker.indexcache.mb=0A= 2015-02-21 19:01:19,858 DEBUG 
[eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.timeline-service.handler-thread-count=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.checkpoint.check.period=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.hostname=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.block.write.replace-datanode-on-failure.enable=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = net.topology.impl=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.directoryscan.interval=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.s3a.multipart.purge.age=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.java.secure.random.algorithm=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.container-monitor.interval-ms=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.default.chunk.view.size=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.s3a.multipart.threshold=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.speculative.slownodethreshold=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.reduce.slowstart.completedmaps=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.proxyuser.HTTP.groups=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapred.reducer.new-api=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.instrumentation.requires.admin=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = io.compression.codec.bzip2.library=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.http.authentication.signature.secret.file=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.safemode.min.datanodes=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.localizer.cache.target-size-mb=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.input.fileinputformat.inputdir=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.maxattempts=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = 
org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.https.address=0A= 2015-02-21 19:01:19,858 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = s3native.replication=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.inotify.max.events.per.rpc=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.path.based.cache.retry.interval.ms=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.reduce.skip.proc.count.autoincr=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.cache.revocation.polling.ms=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.cleaner.interval-ms=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = file.replication=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.hdfs.configuration.version=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.proxyuser.flume.groups=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ipc.client.idlethreshold=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.tmp.dir=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.store.class=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.address=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.restart.recover=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.cluster.local.dir=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.client.nodemanager-client-async.thread-pool-max-size=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.ipc.serializer.type=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.decommission.nodes.per.interval=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.resource.cpu-vcores=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.reject-unresolved-dn-topology-mapping=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.delegation.key.update-interval=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = 
fs.s3.buffer.dir=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.client.read.shortcircuit.streams.cache.expiry.ms=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.support.allow.format=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.remote-app-log-dir=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = io.compression.codecs=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.map.memory.mb=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.edit.log.autoroll.multiplier.threshold=0A= 2015-02-21 19:01:19,859 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.work.around.non.threadsafe.getpwuid=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.task.profile.reduce.params=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.ha.automatic-failover.enabled=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.edits.noeditlogchannelflush=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.stale.datanode.interval=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.shuffle.transfer.buffer.size=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobtracker.persist.jobstatus.active=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.logging.level=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.log-dirs=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ha.health-monitor.sleep-after-disconnect.ms=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.fs.state-store.uri=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.checkpoint.edits.dir=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.keytab=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.rpc.socket.factory.class.default=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.http.address=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.task.profile=0A= 2015-02-21 19:01:19,860 DEBUG 
[eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.move.interval-ms=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.scheduler.fair.sizebasedweight=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.edits.dir=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.security.kms.client.encrypted.key.cache.size=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.dispatcher.exit-on-error=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = hadoop.fuse.timer.period=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.http.policy=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.jobhistory.intermediate-done-dir=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.map.skip.proc.count.autoincr=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.AbstractFileSystem.viewfs.impl=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.job.speculative.slowtaskthreshold=0A= 2015-02-21 19:01:19,860 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled=0A= 2015-02-21 19:01:19,867 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = s3native.stream-buffer-size=0A= 2015-02-21 19:01:19,867 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.nodemanager.delete.debug-delay-sec=0A= 2015-02-21 19:01:19,867 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.secondary.namenode.kerberos.internal.spnego.principal=0A= 2015-02-21 19:01:19,867 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.datanode.available-space-volume-choosing-policy.balanced-space-thresh= old=0A= 2015-02-21 19:01:19,867 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.s3n.multipart.uploads.block.size=0A= 2015-02-21 19:01:19,867 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = dfs.namenode.safemode.threshold-pct=0A= 2015-02-21 19:01:19,867 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = mapreduce.ifile.readahead.bytes=0A= 2015-02-21 19:01:19,867 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = yarn.scheduler.maximum-allocation-mb=0A= 2015-02-21 19:01:19,867 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = ipc.client.fallback-to-simple-auth-allowed=0A= 2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] = org.apache.hadoop.conf.Configuration: Handling deprecation for = fs.har.impl.disable.cache=0A= 
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.timeline-service.leveldb-timeline-store.read-cache-size
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.timeline-service.hostname
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for s3native.bytes-per-checksum
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.job.committer.setup.cleanup.needed
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for fs.s3a.paging.maximum
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.client.nodemanager-connect.retry-interval-ms
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.nodemanager.log-aggregation.compression-type
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.app.mapreduce.am.job.committer.commit-window
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for hadoop.http.authentication.type
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.client.failover.sleep.base.millis
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.job.submithostaddress
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.nodemanager.vmem-check-enabled
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for hadoop.jetty.logs.serve.aliases
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for ha.failover-controller.graceful-fence.rpc-timeout.ms
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.reduce.shuffle.input.buffer.percent
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.datanode.max.transfer.threads
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.reduce.merge.inmem.threshold
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.task.io.sort.mb
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.namenode.acls.enabled
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for hadoop.security.kms.client.authentication.retry-count
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.client.application-client-protocol.poll-interval-ms
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.namenode.handler.count
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.resourcemanager.connect.max-wait.ms
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.namenode.retrycache.heap.percent
2015-02-21 19:01:19,868 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.timeline-service.enabled
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for hadoop.ssl.client.conf
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.resourcemanager.container.liveness-monitor.interval-ms
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.nodemanager.vmem-pmem-ratio
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.client.completion.pollinterval
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.app.mapreduce.client.max-retries
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for hadoop.ssl.enabled
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for fs.client.resolve.remote.symlinks
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for fs.AbstractFileSystem.hdfs.impl
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.app.mapreduce.am.admin.user.env
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.reduce.java.opts
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.client.genericoptionsparser.used
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.tasktracker.reduce.tasks.maximum
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.map.java.opts
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.nodemanager.hostname
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.reduce.input.buffer.percent
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.scheduler.fair.assignmultiple
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for fs.s3a.multipart.purge
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.app.mapreduce.am.command-opts
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.namenode.invalidate.work.pct.per.iteration
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.bytes-per-checksum
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for hadoop.proxyuser.oozie.groups
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.resourcemanager.webapp.https.address
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.replication
2015-02-21 19:01:19,869 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.datanode.block.id.layout.upgrade.threads
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.shuffle.ssl.file.buffer.size
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.namenode.list.cache.directives.num.responses
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.permissions.enabled
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for hadoop.proxyuser.oozie.hosts
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.jobtracker.maxtasks.perjob
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.datanode.use.datanode.hostname
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.task.userlog.limit.kb
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.namenode.fs-limits.max-directory-items
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for hadoop.security.kms.client.encrypted.key.cache.low-watermark
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for fs.s3a.buffer.dir
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for s3.client-write-packet-size
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for hadoop.user.group.static.mapping.overrides
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.shuffle.max.threads
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.client.failover.sleep.max.millis
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.job.maps
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.namenode.fs-limits.max-component-length
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.root.logger
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for hadoop.ssl.enabled.protocols
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for s3.blocksize
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.map.output.compress
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.timeline-service.generic-application-history.fs-history-store.uri
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.namenode.edits.journal-plugin.qjournal
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.namenode.datanode.registration.ip-hostname-check
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.nodemanager.pmem-check-enabled
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.client.short.circuit.replica.stale.threshold.ms
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.client.https.need-auth
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.scheduler.minimum-allocation-mb
2015-02-21 19:01:19,870 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for hadoop.proxyuser.hive.hosts
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.jobhistory.max-age-ms
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for ftp.replication
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.namenode.secondary.https-address
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.blockreport.split.threshold
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.input.fileinputformat.split.minsize
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for fs.s3n.block.size
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.job.token.tracking.ids.enabled
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.ipc.rpc.class
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.namenode.num.extra.edits.retained
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.nodemanager.localizer.cache.cleanup.interval-ms
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for hadoop.http.staticuser.user
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.jobhistory.move.thread-count
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for fs.s3a.multipart.size
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.job.jvm.numtasks
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.task.profile.maps
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.nodemanager.resourcemanager.connect.wait.secs
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.datanode.max.locked.memory
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.cachereport.intervalMsec
2015-02-21 19:01:19,871 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.shuffle.port
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.resourcemanager.nodemanager.minimum.version
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.shuffle.connection-keep-alive.timeout
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.reduce.shuffle.merge.percent
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.jobtracker.http.address
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.task.skip.start.attempts
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.resourcemanager.connect.retry-interval.ms
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.scheduler.minimum-allocation-vcores
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.task.io.sort.factor
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.namenode.checkpoint.dir
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for nfs.exports.allowed.hosts
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for tfile.fs.input.buffer.size
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for fs.s3.block.size
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for tfile.io.chunk.size
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for fs.s3n.multipart.copy.block.size
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for io.serializations
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.resourcemanager.max-completed-applications
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.jobhistory.principal
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.output.fileoutputformat.outputdir
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.resourcemanager.ha.automatic-failover.zk-base-path
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.reduce.shuffle.fetch.retry.interval-ms
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.job.end-notification.retry.interval
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.namenode.backup.address
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for fs.s3n.multipart.uploads.enabled
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for io.seqfile.sorter.recordlimit
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.block.access.token.enable
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for s3native.client-write-packet-size
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.namenode.fs-limits.max-xattr-size
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for ftp.bytes-per-checksum
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for hadoop.security.group.mapping
2015-02-21 19:01:19,872 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.client.domain.socket.data.traffic
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.client.read.shortcircuit.streams.cache.size
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for fs.s3a.connection.timeout
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.job.end-notification.max.retry.interval
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.acl.enable
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.nm.liveness-monitor.expiry-interval-ms
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.application.classpath
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for hadoop.root.logger
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.input.fileinputformat.list-status.num-threads
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.client.mmap.cache.size
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.tasktracker.map.tasks.maximum
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.scheduler.fair.user-as-default-queue
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.timeline-service.ttl-enable
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.nodemanager.linux-container-executor.resources-handler.class
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.namenode.max.objects
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.resourcemanager.state-store.max-completed-applications
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.namenode.delegation.token.max-lifetime
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.job.classloader
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.job.hdfs-servers
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.application.classpath
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.datanode.hdfs-blocks-metadata.enabled
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.tasktracker.dns.nameserver
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.datanode.readahead.bytes
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.job.ubertask.maxreduces
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.image.compress
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.shuffle.ssl.enabled
2015-02-21 19:01:19,873 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.log-aggregation-enable
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.tasktracker.report.address
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.tasktracker.http.threads
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.stream-buffer-size
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for tfile.fs.output.buffer.size
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for fs.permissions.umask-mode
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.client.datanode-restart.timeout
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.resourcemanager.am.max-attempts
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for ha.failover-controller.graceful-fence.connection.retries
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for hadoop.proxyuser.hdfs.groups
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.datanode.drop.cache.behind.writes
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.job.application.attempt.id
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.map.output.value.class
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for hadoop.proxyuser.HTTP.hosts
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for hadoop.common.configuration.version
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.job.ubertask.enable
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for yarn.app.mapreduce.am.resource.cpu-vcores
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for dfs.namenode.replication.work.multiplier.per.iteration
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.job.acl-modify-job
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for io.seqfile.local.dir
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for fs.s3.sleepTimeSeconds
2015-02-21 19:01:19,874 DEBUG [eventHandlingThread] org.apache.hadoop.conf.Configuration: Handling deprecation for mapreduce.client.output.filter
2015-02-21 19:01:19,924 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.rm.ContainerRequestEvent.EventType: CONTAINER_REQ
2015-02-21 19:01:19,925 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: TASK_CONTAINER_NEED_UPDATE
2015-02-21 19:01:19,925 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.rm.ContainerRequestEvent.EventType: CONTAINER_REQ
2015-02-21 19:01:19,925 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: TASK_CONTAINER_NEED_UPDATE
2015-02-21 19:01:19,925 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.rm.ContainerRequestEvent.EventType: CONTAINER_REQ
2015-02-21 19:01:19,925 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: TASK_CONTAINER_NEED_UPDATE
2015-02-21 19:01:19,925 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.rm.ContainerRequestEvent.EventType: CONTAINER_REQ
2015-02-21 19:01:19,925 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: TASK_CONTAINER_NEED_UPDATE
2015-02-21 19:01:19,925 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.rm.ContainerRequestEvent.EventType: CONTAINER_REQ
2015-02-21 19:01:19,925 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: TASK_CONTAINER_NEED_UPDATE
2015-02-21 19:01:19,926 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.rm.ContainerRequestEvent.EventType: CONTAINER_REQ
2015-02-21 19:01:19,927 INFO [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceRequest:
2015-02-21 19:01:19,927 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added attempt req to host hadoop0.rdpratti.com
2015-02-21 19:01:19,927 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added attempt req to rack /default
2015-02-21 19:01:19,928 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Added priority=20
2015-02-21 19:01:19,930 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: NORMALIZED_RESOURCE
2015-02-21 19:01:19,938 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=20 resourceName=hadoop0.rdpratti.com numContainers=1 #asks=1
2015-02-21 19:01:19,938 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=20 resourceName=/default numContainers=1 #asks=2
2015-02-21 19:01:19,938 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=20 resourceName=* numContainers=1 #asks=3
2015-02-21 19:01:19,944 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added attempt req to host hadoop0.rdpratti.com
2015-02-21 19:01:19,945 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added attempt req to rack /default
2015-02-21 19:01:19,945 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=20 resourceName=hadoop0.rdpratti.com numContainers=2 #asks=3
2015-02-21 19:01:19,945 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=20 resourceName=/default numContainers=2 #asks=3
2015-02-21 19:01:19,945 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=20 resourceName=* numContainers=2 #asks=3
2015-02-21 19:01:19,945 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added attempt req to host hadoop0.rdpratti.com
2015-02-21 19:01:19,945 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added attempt req to rack /default
2015-02-21 19:01:19,945 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=20 resourceName=hadoop0.rdpratti.com numContainers=3 #asks=3
2015-02-21 19:01:19,945 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=20 resourceName=/default numContainers=3 #asks=3
2015-02-21 19:01:19,945 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=20 resourceName=* numContainers=3 #asks=3
2015-02-21 19:01:19,946 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added attempt req to host hadoop3.rdpratti.com
2015-02-21 19:01:19,946 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added attempt req to rack /default
2015-02-21 19:01:19,946 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=20 resourceName=hadoop3.rdpratti.com numContainers=1 #asks=4
2015-02-21 19:01:19,946 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=20 resourceName=/default numContainers=4 #asks=4
2015-02-21 19:01:19,946 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=20 resourceName=* numContainers=4 #asks=4
2015-02-21 19:01:19,946 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added attempt req to host hadoop1.rdpratti.com
2015-02-21 19:01:19,946 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added attempt req to rack /default
2015-02-21 19:01:19,946 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=20 resourceName=hadoop1.rdpratti.com numContainers=1 #asks=5
2015-02-21 19:01:19,946 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=20 resourceName=/default numContainers=5 #asks=5
2015-02-21 19:01:19,946 DEBUG [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=20 resourceName=* numContainers=5 #asks=5
2015-02-21 19:01:19,947 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: NORMALIZED_RESOURCE
2015-02-21 19:01:19,947 INFO [Thread-50] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: reduceResourceRequest:
2015-02-21 19:01:20,041 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1_conf.xml, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
2015-02-21 19:01:20,218 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk packet full seqno=0, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1_conf.xml, bytesCurBlock=65024, blockSize=134217728, appendChunk=false
2015-02-21 19:01:20,219 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 0
2015-02-21 19:01:20,225 DEBUG [Thread-59] org.apache.hadoop.hdfs.DFSClient: Allocating new block
2015-02-21 19:01:20,230 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: computePacketChunkSize: src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1_conf.xml, chunkSize=516, chunksPerPacket=127, packetSize=65532
2015-02-21 19:01:20,230 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=1, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1_conf.xml, packetSize=65532, chunksPerPacket=127, bytesCurBlock=65024
2015-02-21 19:01:20,246 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #14
2015-02-21 19:01:20,253 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #14
2015-02-21 19:01:20,254 DEBUG [Thread-59] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: addBlock took 9ms
2015-02-21 19:01:20,257 DEBUG [Thread-59] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.253:50010
2015-02-21 19:01:20,257 DEBUG [Thread-59] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.250:50010
2015-02-21 19:01:20,257 DEBUG [Thread-59] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.252:50010
2015-02-21 19:01:20,257 DEBUG [Thread-59] org.apache.hadoop.hdfs.DFSClient: Connecting to datanode 192.168.2.253:50010
2015-02-21 19:01:20,261 DEBUG [Thread-59] org.apache.hadoop.hdfs.DFSClient: Send buf size 124928
2015-02-21 19:01:20,261 DEBUG [Thread-59] org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /192.168.2.253, datanodeId = 192.168.2.253:50010
2015-02-21 19:01:20,298 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 1
2015-02-21 19:01:20,298 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 2
2015-02-21 19:01:20,298 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 2
2015-02-21 19:01:20,356 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1_conf.xml block BP-268700609-192.168.2.253-1419532004456:blk_1073754564_13740] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754564_13740 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 65024
2015-02-21 19:01:20,358 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1_conf.xml block BP-268700609-192.168.2.253-1419532004456:blk_1073754564_13740] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754564_13740 sending packet packet seqno:1 offsetInBlock:65024 lastPacketInBlock:false lastByteOffsetInBlock: 108018
2015-02-21 19:01:20,387 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754564_13740] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 4430158
2015-02-21 19:01:20,387 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754564_13740] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 4538633
2015-02-21 19:01:20,387 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1_conf.xml block BP-268700609-192.168.2.253-1419532004456:blk_1073754564_13740] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754564_13740 sending packet packet seqno:2 offsetInBlock:108018 lastPacketInBlock:true lastByteOffsetInBlock: 108018
2015-02-21 19:01:20,395 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754564_13740] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 2 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 3971642
2015-02-21 19:01:20,395 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1_conf.xml block BP-268700609-192.168.2.253-1419532004456:blk_1073754564_13740] org.apache.hadoop.hdfs.DFSClient: Closing old block BP-268700609-192.168.2.253-1419532004456:blk_1073754564_13740
2015-02-21 19:01:20,398 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #15
2015-02-21 19:01:20,409 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #15
2015-02-21 19:01:20,409 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: complete took 11ms
2015-02-21 19:01:20,414 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:20,430 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler AM_STARTED
2015-02-21 19:01:20,430 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:20,431 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002.summary: masked=rw-r--r--
2015-02-21 19:01:20,432 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #16
2015-02-21 19:01:20,442 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #16
2015-02-21 19:01:20,442 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: create took 11ms
2015-02-21 19:01:20,442 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: computePacketChunkSize: src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002.summary, chunkSize=516, chunksPerPacket=127, packetSize=65532
2015-02-21 19:01:20,444 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002.summary, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
2015-02-21 19:01:20,444 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 0
2015-02-21 19:01:20,444 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 1
2015-02-21 19:01:20,444 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 1
2015-02-21 19:01:20,444 DEBUG [Thread-62] org.apache.hadoop.hdfs.DFSClient: Allocating new block
2015-02-21 19:01:20,444 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #17
2015-02-21 19:01:20,448 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:4 ScheduledMaps:5 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
2015-02-21 19:01:20,453 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #17
2015-02-21 19:01:20,453 DEBUG [Thread-62] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: addBlock took 9ms
2015-02-21 19:01:20,454 DEBUG [Thread-62] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.253:50010
2015-02-21 19:01:20,454 DEBUG [Thread-62] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.252:50010
2015-02-21 19:01:20,454 DEBUG [Thread-62] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.250:50010
2015-02-21 19:01:20,454 DEBUG [Thread-62] org.apache.hadoop.hdfs.DFSClient: Connecting to datanode 192.168.2.253:50010
2015-02-21 19:01:20,455 DEBUG [Thread-62] org.apache.hadoop.hdfs.DFSClient: Send buf size 124928
2015-02-21 19:01:20,455 DEBUG [Thread-62] org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /192.168.2.253, datanodeId = 192.168.2.253:50010
2015-02-21 19:01:20,462 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002.summary block BP-268700609-192.168.2.253-1419532004456:blk_1073754565_13741] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754565_13741 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 301
2015-02-21 19:01:20,467 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #18
2015-02-21 19:01:20,467 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754565_13741] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 2162806
2015-02-21 19:01:20,470 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002.summary block BP-268700609-192.168.2.253-1419532004456:blk_1073754565_13741] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754565_13741 sending packet packet seqno:1 offsetInBlock:301 lastPacketInBlock:true lastByteOffsetInBlock: 301
2015-02-21 19:01:20,477 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754565_13741] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 4028867
2015-02-21 19:01:20,478 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #19
2015-02-21 19:01:20,486 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #19
2015-02-21 19:01:20,490 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: complete took 12ms
2015-02-21 19:01:20,490 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler JOB_SUBMITTED
2015-02-21 19:01:20,490 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:20,490 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler JOB_QUEUE_CHANGED
2015-02-21 19:01:20,490 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:20,490 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler JOB_INITED
2015-02-21 19:01:20,490 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:20,490 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler JOB_INFO_CHANGED
2015-02-21 19:01:20,491 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:20,491 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler TASK_STARTED
2015-02-21 19:01:20,491 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:20,491 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler TASK_STARTED
2015-02-21 19:01:20,491 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:20,491 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler TASK_STARTED
2015-02-21 19:01:20,491 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:20,491 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler TASK_STARTED
2015-02-21 19:01:20,491 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:20,491 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler TASK_STARTED
2015-02-21 19:01:20,491 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:20,491 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler TASK_STARTED
2015-02-21 19:01:20,491 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:20,492 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler TASK_STARTED
2015-02-21 19:01:20,492 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:20,492 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler TASK_STARTED
2015-02-21 19:01:20,492 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:20,492 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler TASK_STARTED
2015-02-21 19:01:20,492 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler NORMALIZED_RESOURCE
2015-02-21 19:01:20,492 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler NORMALIZED_RESOURCE
2015-02-21 19:01:20,514 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #18
2015-02-21 19:01:20,514 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 48ms
2015-02-21 19:01:20,538 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1424550134651_0002: ask=5 release= 0 newContainers=0 finishedContainers=0 resourcelimit= knownNMs=4
2015-02-21 19:01:20,538 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: headroom=
2015-02-21 19:01:20,540 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=
2015-02-21 19:01:20,540 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 4
2015-02-21 19:01:20,821 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #68
2015-02-21 19:01:20,822 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#68 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:20,822 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:20,822 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:20,823 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#68 Retry#0
2015-02-21 19:01:20,823 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#68 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:20,829 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #69
2015-02-21 19:01:20,829 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#69 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:20,829 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:20,839 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 3 procesingTime= 7
2015-02-21 19:01:20,839 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#69 Retry#0
2015-02-21 19:01:20,840 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#69 Retry#0 Wrote 32 bytes.
2015-02-21 19:01:20,845 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #70
2015-02-21 19:01:20,845 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#70 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:20,845 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:20,846 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 1
2015-02-21 19:01:20,846 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#70 Retry#0
2015-02-21 19:01:20,846 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#70 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:20,847 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #71
2015-02-21 19:01:20,847 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#71 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:20,847 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:20,848 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 1
2015-02-21 19:01:20,848 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#71 Retry#0
2015-02-21 19:01:20,848 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#71 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:21,540 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #20
2015-02-21 19:01:21,546 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #20
2015-02-21 19:01:21,546 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 6ms
2015-02-21 19:01:21,553 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: headroom=
2015-02-21 19:01:21,553 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received new Container :Container: [ContainerId: container_1424550134651_0002_01_000002, NodeId: hadoop3.rdpratti.com:8041, NodeHttpAddress: hadoop3.rdpratti.com:8042, Resource: , Priority: 20, Token: Token { kind: ContainerToken, service: 192.168.2.252:8041 }, ]
2015-02-21 19:01:21,553 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received new Container :Container: [ContainerId: container_1424550134651_0002_01_000003, NodeId: hadoop1.rdpratti.com:8041, NodeHttpAddress: hadoop1.rdpratti.com:8042, Resource: , Priority: 20, Token: Token { kind: ContainerToken, service: 192.168.2.250:8041 }, ]
2015-02-21 19:01:21,553 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 2
2015-02-21 19:01:21,554 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning container container_1424550134651_0002_01_000002 with priority 20 to NM hadoop3.rdpratti.com:8041
2015-02-21 19:01:21,554 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning container container_1424550134651_0002_01_000003 with priority 20 to NM hadoop1.rdpratti.com:8041
2015-02-21 19:01:21,554 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Host matched to the request list hadoop3.rdpratti.com
2015-02-21 19:01:21,554 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE decResourceRequest: applicationId=2 priority=20 resourceName=hadoop3.rdpratti.com numContainers=1 #asks=0
2015-02-21 19:01:21,554 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER decResourceRequest: applicationId=2 priority=20 resourceName=hadoop3.rdpratti.com numContainers=0 #asks=1
2015-02-21 19:01:21,554 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE decResourceRequest: applicationId=2 priority=20 resourceName=/default numContainers=5 #asks=1
2015-02-21 19:01:21,554 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER decResourceRequest: applicationId=2 priority=20 resourceName=/default numContainers=4 #asks=2
2015-02-21 19:01:21,554 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE decResourceRequest: applicationId=2 priority=20 resourceName=* numContainers=5 #asks=2
2015-02-21 19:01:21,554 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER decResourceRequest: applicationId=2 priority=20 resourceName=* numContainers=4 #asks=3
2015-02-21 19:01:21,555 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerAssignedEvent.EventType: TA_ASSIGNED
2015-02-21 19:01:21,555 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000003_0 of type TA_ASSIGNED
2015-02-21 19:01:21,555 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1424550134651_0002_01_000002 to attempt_1424550134651_0002_m_000003_0
2015-02-21 19:01:21,555 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container (Container: [ContainerId: container_1424550134651_0002_01_000002, NodeId: hadoop3.rdpratti.com:8041, NodeHttpAddress: hadoop3.rdpratti.com:8042, Resource: , Priority: 20, Token: Token { kind: ContainerToken, service: 192.168.2.252:8041 }, ]) to task attempt_1424550134651_0002_m_000003_0 on node hadoop3.rdpratti.com:8041
2015-02-21 19:01:21,556 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned based on host match hadoop3.rdpratti.com
2015-02-21 19:01:21,556 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Host matched to the request list hadoop1.rdpratti.com
2015-02-21 19:01:21,556 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
BEFORE = decResourceRequest: applicationId=3D2 priority=3D20 = resourceName=3Dhadoop1.rdpratti.com numContainers=3D1 #asks=3D3=0A= 2015-02-21 19:01:21,556 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER = decResourceRequest: applicationId=3D2 priority=3D20 = resourceName=3Dhadoop1.rdpratti.com numContainers=3D0 #asks=3D4=0A= 2015-02-21 19:01:21,556 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE = decResourceRequest: applicationId=3D2 priority=3D20 = resourceName=3D/default numContainers=3D4 #asks=3D4=0A= 2015-02-21 19:01:21,556 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER = decResourceRequest: applicationId=3D2 priority=3D20 = resourceName=3D/default numContainers=3D3 #asks=3D4=0A= 2015-02-21 19:01:21,556 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE = decResourceRequest: applicationId=3D2 priority=3D20 resourceName=3D* = numContainers=3D4 #asks=3D4=0A= 2015-02-21 19:01:21,557 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER = decResourceRequest: applicationId=3D2 priority=3D20 resourceName=3D* = numContainers=3D3 #asks=3D4=0A= 2015-02-21 19:01:21,557 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned = container container_1424550134651_0002_01_000003 to = attempt_1424550134651_0002_m_000004_0=0A= 2015-02-21 19:01:21,557 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned = container (Container: [ContainerId: = container_1424550134651_0002_01_000003, NodeId: = hadoop1.rdpratti.com:8041, NodeHttpAddress: hadoop1.rdpratti.com:8042, = Resource: , Priority: 20, Token: Token { kind: = ContainerToken, service: 192.168.2.250:8041 }, ]) to task = attempt_1424550134651_0002_m_000004_0 on node hadoop1.rdpratti.com:8041=0A= 2015-02-21 19:01:21,557 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned = based on host match hadoop1.rdpratti.com=0A= 2015-02-21 19:01:21,557 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: = Recalculating schedule, headroom=3D=0A= 2015-02-21 19:01:21,557 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow = start threshold not met. 
completedMapsForReduceSlowstart 4=0A= 2015-02-21 19:01:21,557 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After = Scheduling: PendingReds:4 ScheduledMaps:3 ScheduledReds:0 AssignedMaps:2 = AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0 = HostLocal:2 RackLocal:0=0A= 2015-02-21 19:01:21,566 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapred.SortedRanges: currentIndex 0 0:0=0A= 2015-02-21 19:01:21,601 INFO [AsyncDispatcher event handler] = org.apache.hadoop.yarn.util.RackResolver: Resolved hadoop3.rdpratti.com = to /default=0A= 2015-02-21 19:01:21,603 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #21=0A= 2015-02-21 19:01:21,604 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #21=0A= 2015-02-21 19:01:21,604 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 2ms=0A= 2015-02-21 19:01:21,604 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #22=0A= 2015-02-21 19:01:21,606 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #22=0A= 2015-02-21 19:01:21,606 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 2ms=0A= 2015-02-21 19:01:21,618 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar = file on the remote FS is = hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651= _0002/job.jar=0A= 2015-02-21 19:01:21,618 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #23=0A= 2015-02-21 19:01:21,619 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #23=0A= 2015-02-21 19:01:21,619 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms=0A= 2015-02-21 19:01:21,620 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #24=0A= 2015-02-21 19:01:21,621 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #24=0A= 2015-02-21 19:01:21,621 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms=0A= 2015-02-21 19:01:21,621 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The = job-conf file on the remote FS is = /user/cloudera/.staging/job_1424550134651_0002/job.xml=0A= 
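Aside, since the allocator lines above are dense: the BEFORE/AFTER decResourceRequest triples (node, rack /default, then *) are plain YARN ask bookkeeping, nothing MR-specific. A minimal sketch of the same request/allocate/retire cycle against the public AMRMClient API - a hypothetical standalone AM, and the 1024MB/1-vcore size is my own assumption, since the Resource values were stripped from this log - would be:

    import java.util.List;
    import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
    import org.apache.hadoop.yarn.api.records.Container;
    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class AllocateLoopSketch {
        public static void main(String[] args) throws Exception {
            AMRMClient<ContainerRequest> rm = AMRMClient.createAMRMClient();
            rm.init(new YarnConfiguration());
            rm.start();
            rm.registerApplicationMaster("", 0, ""); // host, RPC port, tracking URL

            // One map wanting hadoop3.rdpratti.com; the client expands this
            // into the node-, rack- and *-level asks the log counts down.
            ContainerRequest req = new ContainerRequest(
                    Resource.newInstance(1024, 1),           // assumed size
                    new String[] { "hadoop3.rdpratti.com" }, // preferred node
                    null,                                    // racks (derived)
                    Priority.newInstance(20));
            rm.addContainerRequest(req);

            // Heartbeat (allocate) until the RM hands a container back,
            // then retire the ask.
            List<Container> got;
            do {
                Thread.sleep(1000);
                AllocateResponse resp = rm.allocate(0.0f);
                got = resp.getAllocatedContainers();
            } while (got.isEmpty());
            rm.removeContainerRequest(req);
            System.out.println("Got allocated containers " + got.size());
        }
    }

The removeContainerRequest() call is what drives the numContainers/#asks countdown recorded above.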
2015-02-21 19:01:21,622 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0 tokens and #1 secret keys for NM use for launching container
2015-02-21 19:01:21,622 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of containertokens_dob is 1
2015-02-21 19:01:21,623 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting shuffle token in serviceData
2015-02-21 19:01:21,647 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000003_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2015-02-21 19:01:21,648 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:21,648 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:21,650 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerAssignedEvent.EventType: TA_ASSIGNED
2015-02-21 19:01:21,650 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000004_0 of type TA_ASSIGNED
2015-02-21 19:01:21,650 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapred.SortedRanges: currentIndex 0 0:0
2015-02-21 19:01:21,650 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.util.RackResolver: Resolved hadoop1.rdpratti.com to /default
2015-02-21 19:01:21,651 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000004_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2015-02-21 19:01:21,651 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:21,651 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:21,651 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.launcher.ContainerRemoteLaunchEvent.EventType: CONTAINER_REMOTE_LAUNCH for container container_1424550134651_0002_01_000002 taskAttempt attempt_1424550134651_0002_m_000003_0
2015-02-21 19:01:21,651 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: TASK_CONTAINER_NEED_UPDATE
2015-02-21 19:01:21,651 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.launcher.ContainerRemoteLaunchEvent.EventType: CONTAINER_REMOTE_LAUNCH for container container_1424550134651_0002_01_000003 taskAttempt attempt_1424550134651_0002_m_000004_0
2015-02-21 19:01:21,651 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: TASK_CONTAINER_NEED_UPDATE
2015-02-21 19:01:21,652 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1424550134651_0002_01_000002 taskAttempt attempt_1424550134651_0002_m_000003_0
2015-02-21 19:01:21,652 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1424550134651_0002_01_000003 taskAttempt attempt_1424550134651_0002_m_000004_0
2015-02-21 19:01:21,655 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1424550134651_0002_m_000003_0
2015-02-21 19:01:21,655 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1424550134651_0002_m_000004_0
2015-02-21 19:01:21,656 INFO [ContainerLauncher #0] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : hadoop3.rdpratti.com:8041
2015-02-21 19:01:21,658 DEBUG [ContainerLauncher #0] org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, Service: 192.168.2.252:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@1028ef8e)
2015-02-21 19:01:21,660 DEBUG [ContainerLauncher #0] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:88)
2015-02-21 19:01:21,660 DEBUG [ContainerLauncher #0] org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
2015-02-21 19:01:21,664 DEBUG [ContainerLauncher #0] org.apache.hadoop.ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@27c8bfa4
2015-02-21 19:01:21,677 INFO [ContainerLauncher #1] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : hadoop1.rdpratti.com:8041
2015-02-21 19:01:21,678 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, Service: 192.168.2.250:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@396c36f8)
2015-02-21 19:01:21,678 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:88)
2015-02-21 19:01:21,678 DEBUG [ContainerLauncher #1] org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
2015-02-21 19:01:21,678 DEBUG [ContainerLauncher #1] org.apache.hadoop.ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@27c8bfa4
2015-02-21 19:01:21,701 DEBUG [ContainerLauncher #0] org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.
2015-02-21 19:01:21,701 DEBUG [ContainerLauncher #0] org.apache.hadoop.ipc.Client: Connecting to hadoop3.rdpratti.com/192.168.2.252:8041
2015-02-21 19:01:21,701 DEBUG [ContainerLauncher #1] org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.
2015-02-21 19:01:21,701 DEBUG [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Connecting to hadoop1.rdpratti.com/192.168.2.250:8041
2015-02-21 19:01:21,701 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
2015-02-21 19:01:21,701 DEBUG [ContainerLauncher #0] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
2015-02-21 19:01:21,702 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 19:01:21,702 DEBUG [ContainerLauncher #0] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 19:01:21,732 DEBUG [ContainerLauncher #0] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"JMv2N44ZwBAMO+4VkZA/aUUlh255xZ9fSutqsBXh\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}

2015-02-21 19:01:21,733 DEBUG [ContainerLauncher #0] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@72be18b7
2015-02-21 19:01:21,734 INFO [ContainerLauncher #0] org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: 192.168.2.252:8041. Current token is Kind: NMToken, Service: 192.168.2.252:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@2bf0c8c1)
2015-02-21 19:01:21,734 DEBUG [ContainerLauncher #0] org.apache.hadoop.security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2015-02-21 19:01:21,734 DEBUG [ContainerLauncher #0] org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2015-02-21 19:01:21,734 DEBUG [ContainerLauncher #0] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting username: AAABS63OA3sAAAACAAAAAQAZaGFkb29wMy5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo
2015-02-21 19:01:21,734 DEBUG [ContainerLauncher #0] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting userPassword
2015-02-21 19:01:21,734 DEBUG [ContainerLauncher #0] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting realm: default
2015-02-21 19:01:21,735 DEBUG [ContainerLauncher #0] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
token: "charset=utf-8,username=\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMy5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=\"default\",nonce=\"JMv2N44ZwBAMO+4VkZA/aUUlh255xZ9fSutqsBXh\",nc=00000001,cnonce=\"B6h7RbPM9qXwSiD4y0+ZFr94bKUFjIkhN+yAGX0I\",digest-uri=\"/default\",maxbuf=65536,response=9268af7318d08e2166887f7e44a024d3,qop=auth"
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
}

2015-02-21 19:01:21,739 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"/6g44PtFF7Q5AFXJb/Z/w8tl+HVEmGmcf/V2VBew\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}

2015-02-21 19:01:21,739 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@52465b94
2015-02-21 19:01:21,739 INFO [ContainerLauncher #1] org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: 192.168.2.250:8041. Current token is Kind: NMToken, Service: 192.168.2.250:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@9267bbf)
2015-02-21 19:01:21,739 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2015-02-21 19:01:21,739 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2015-02-21 19:01:21,740 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting username: AAABS63OA3sAAAACAAAAAQAZaGFkb29wMS5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo
2015-02-21 19:01:21,740 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting userPassword
2015-02-21 19:01:21,740 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting realm: default
2015-02-21 19:01:21,740 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
token: "charset=utf-8,username=\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMS5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=\"default\",nonce=\"/6g44PtFF7Q5AFXJb/Z/w8tl+HVEmGmcf/V2VBew\",nc=00000001,cnonce=\"NuR+kmJzl6iZ/HAnmtk7s1eRHacnbVXtb6VboXJB\",digest-uri=\"/default\",maxbuf=65536,response=998dac979206e2baff9d0192968a988c,qop=auth"
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
}

2015-02-21 19:01:21,753 DEBUG [ContainerLauncher #0] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: SUCCESS
token: "rspauth=1eb3f4eb7c8f1290cf32f102410d8399"

2015-02-21 19:01:21,754 DEBUG [ContainerLauncher #0] org.apache.hadoop.ipc.Client: Negotiated QOP is :auth
2015-02-21 19:01:21,754 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001: starting, having connections 4
2015-02-21 19:01:21,755 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001 sending #26
2015-02-21 19:01:21,758 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: SUCCESS
token: "rspauth=22da360b063f29eb4a8a2dcdf7772099"

2015-02-21 19:01:21,758 DEBUG [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Negotiated QOP is :auth
2015-02-21 19:01:21,759 DEBUG [IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001: starting, having connections 4
2015-02-21 19:01:21,760 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001 sending #25
2015-02-21 19:01:21,850 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #72
2015-02-21 19:01:21,850 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#72 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:21,850 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:21,850 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 0 procesingTime= 0
2015-02-21 19:01:21,850 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#72 Retry#0
2015-02-21 19:01:21,850 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#72 Retry#0 Wrote 32 bytes.
2015-02-21 19:01:21,851 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #73
2015-02-21 19:01:21,851 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#73 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:21,851 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:21,852 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 1
2015-02-21 19:01:21,852 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#73 Retry#0
2015-02-21 19:01:21,852 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#73 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:21,853 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #74
2015-02-21 19:01:21,853 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#74 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:21,853 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:21,854 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 1
2015-02-21 19:01:21,854 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#74 Retry#0
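For anyone mapping the SASL chatter above to an API: each ContainerLauncher thread authenticates to its NodeManager with the per-node NMToken (the DIGEST-MD5 NEGOTIATE/INITIATE/SUCCESS exchange), and the "sending #25"/"sending #26" lines are the actual startContainers RPCs; the replies, with the shuffle port, show up just below. A minimal sketch of that launch step using the public NMClient API - illustrative only, not the MRAppMaster's ContainerLauncherImpl, and the command and resource maps are placeholders:

    import java.nio.ByteBuffer;
    import java.util.Collections;
    import java.util.Map;
    import org.apache.hadoop.yarn.api.records.Container;
    import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
    import org.apache.hadoop.yarn.client.api.NMClient;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class LaunchSketch {
        // Launches one task container on the NM that owns 'container'.
        // NMClient locates the NMToken for the NM address from the current
        // user's credentials, which is roughly the NMTokenSelector
        // "Looking for service" step in the log.
        static void launch(Container container, ByteBuffer shuffleSecret)
                throws Exception {
            NMClient nm = NMClient.createNMClient();
            nm.init(new YarnConfiguration());
            nm.start();

            // "Putting shuffle token in serviceData": hands the shuffle
            // secret to the aux service that later answers on port 13562.
            Map<String, ByteBuffer> serviceData =
                    Collections.singletonMap("mapreduce_shuffle", shuffleSecret);

            ContainerLaunchContext ctx = ContainerLaunchContext.newInstance(
                    Collections.emptyMap(),            // local resources (job.jar, job.xml)
                    Collections.emptyMap(),            // environment
                    Collections.singletonList("true"), // placeholder command
                    serviceData,
                    null,                              // credentials blob
                    Collections.emptyMap());           // application ACLs
            nm.startContainer(container, ctx);         // the startContainers RPC
        }
    }
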
2015-02-21 19:01:21,854 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#74 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:21,943 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001 got value #26
2015-02-21 19:01:21,943 DEBUG [ContainerLauncher #0] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: startContainers took 242ms
2015-02-21 19:01:21,944 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001: closed
2015-02-21 19:01:21,944 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001: stopped, remaining connections 3
2015-02-21 19:01:21,949 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1424550134651_0002_m_000003_0 : 13562
2015-02-21 19:01:21,950 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerLaunchedEvent.EventType: TA_CONTAINER_LAUNCHED
2015-02-21 19:01:21,950 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000003_0 of type TA_CONTAINER_LAUNCHED
2015-02-21 19:01:21,951 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1424550134651_0002_m_000003_0] using containerId: [container_1424550134651_0002_01_000002 on NM: [hadoop3.rdpratti.com:8041]
2015-02-21 19:01:21,954 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000003_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2015-02-21 19:01:21,954 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:21,954 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:21,954 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: MAP_ATTEMPT_STARTED
2015-02-21 19:01:21,954 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_START
2015-02-21 19:01:21,954 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:21,954 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType: T_ATTEMPT_LAUNCHED
2015-02-21 19:01:21,954 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_m_000003 of type T_ATTEMPT_LAUNCHED
2015-02-21 19:01:21,954 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1424550134651_0002_m_000003 Task Transitioned from SCHEDULED to RUNNING
2015-02-21 19:01:21,955 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler MAP_ATTEMPT_STARTED
2015-02-21 19:01:22,116 DEBUG [IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001 got value #25
2015-02-21 19:01:22,117 DEBUG [ContainerLauncher #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: startContainers took 416ms
2015-02-21 19:01:22,117 DEBUG [IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001: closed
2015-02-21 19:01:22,117 DEBUG [IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001: stopped, remaining connections 2
2015-02-21 19:01:22,117 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1424550134651_0002_m_000004_0 : 13562
2015-02-21 19:01:22,117 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerLaunchedEvent.EventType: TA_CONTAINER_LAUNCHED
2015-02-21 19:01:22,117 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000004_0 of type TA_CONTAINER_LAUNCHED
2015-02-21 19:01:22,117 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1424550134651_0002_m_000004_0] using containerId: [container_1424550134651_0002_01_000003 on NM: [hadoop1.rdpratti.com:8041]
2015-02-21 19:01:22,118 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000004_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2015-02-21 19:01:22,118 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:22,118 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:22,118 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: MAP_ATTEMPT_STARTED
2015-02-21 19:01:22,118 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_START
2015-02-21 19:01:22,118 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:22,118 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType: T_ATTEMPT_LAUNCHED
2015-02-21 19:01:22,118 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_m_000004 of type T_ATTEMPT_LAUNCHED
2015-02-21 19:01:22,118 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1424550134651_0002_m_000004 Task Transitioned from SCHEDULED to RUNNING
2015-02-21 19:01:22,118 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler MAP_ATTEMPT_STARTED
2015-02-21 19:01:22,558 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #27
2015-02-21 19:01:22,560 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #27
2015-02-21 19:01:22,561 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 3ms
2015-02-21 19:01:22,561 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1424550134651_0002: ask=4 release= 0 newContainers=1 finishedContainers=0 resourcelimit= knownNMs=4
2015-02-21 19:01:22,561 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: headroom=
2015-02-21 19:01:22,561 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received new Container :Container: [ContainerId: container_1424550134651_0002_01_000004, NodeId: hadoop2.rdpratti.com:8041, NodeHttpAddress: hadoop2.rdpratti.com:8042, Resource: , Priority: 20, Token: Token { kind: ContainerToken, service: 192.168.2.251:8041 }, ]
2015-02-21 19:01:22,561 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2015-02-21 19:01:22,561 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning container container_1424550134651_0002_01_000004 with priority 20 to NM hadoop2.rdpratti.com:8041
2015-02-21 19:01:22,583 INFO [RMCommunicator Allocator] org.apache.hadoop.yarn.util.RackResolver: Resolved hadoop2.rdpratti.com to /default
2015-02-21 19:01:22,583 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE decResourceRequest: applicationId=2 priority=20 resourceName=hadoop0.rdpratti.com numContainers=3 #asks=0
2015-02-21 19:01:22,583 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER decResourceRequest: applicationId=2 priority=20 resourceName=hadoop0.rdpratti.com numContainers=2 #asks=1
2015-02-21 19:01:22,583 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE decResourceRequest: applicationId=2 priority=20 resourceName=/default numContainers=3 #asks=1
2015-02-21 19:01:22,583 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER decResourceRequest: applicationId=2 priority=20 resourceName=/default numContainers=2 #asks=2
2015-02-21 19:01:22,583 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE decResourceRequest: applicationId=2 priority=20 resourceName=* numContainers=3 #asks=2
2015-02-21 19:01:22,583 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER decResourceRequest: applicationId=2 priority=20 resourceName=* numContainers=2 #asks=3
2015-02-21 19:01:22,583 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerAssignedEvent.EventType: TA_ASSIGNED
2015-02-21 19:01:22,583 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1424550134651_0002_01_000004 to attempt_1424550134651_0002_m_000000_0
2015-02-21 19:01:22,583 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000000_0 of type TA_ASSIGNED
2015-02-21 19:01:22,583 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container (Container: [ContainerId: container_1424550134651_0002_01_000004, NodeId: hadoop2.rdpratti.com:8041, NodeHttpAddress: hadoop2.rdpratti.com:8042, Resource: , Priority: 20, Token: Token { kind: ContainerToken, service: 192.168.2.251:8041 }, ]) to task attempt_1424550134651_0002_m_000000_0 on node hadoop2.rdpratti.com:8041
2015-02-21 19:01:22,583 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapred.SortedRanges: currentIndex 0 0:0
2015-02-21 19:01:22,583 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned based on rack match /default
2015-02-21 19:01:22,583 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=
2015-02-21 19:01:22,583 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 4
2015-02-21 19:01:22,583 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:4 ScheduledMaps:2 ScheduledReds:0 AssignedMaps:3 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0 HostLocal:2 RackLocal:1
2015-02-21 19:01:22,584 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.util.RackResolver: Resolved hadoop2.rdpratti.com to /default
2015-02-21 19:01:22,584 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000000_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2015-02-21 19:01:22,584 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:22,584 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:22,584 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.launcher.ContainerRemoteLaunchEvent.EventType: CONTAINER_REMOTE_LAUNCH for container container_1424550134651_0002_01_000004 taskAttempt attempt_1424550134651_0002_m_000000_0
2015-02-21 19:01:22,584 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: TASK_CONTAINER_NEED_UPDATE
2015-02-21 19:01:22,585 INFO [ContainerLauncher #2] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1424550134651_0002_01_000004 taskAttempt attempt_1424550134651_0002_m_000000_0
2015-02-21 19:01:22,585 INFO [ContainerLauncher #2] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1424550134651_0002_m_000000_0
2015-02-21 19:01:22,585 INFO [ContainerLauncher #2] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : hadoop2.rdpratti.com:8041
2015-02-21 19:01:22,586 DEBUG [ContainerLauncher #2] org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, Service: 192.168.2.251:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@29b6857)
2015-02-21 19:01:22,586 DEBUG [ContainerLauncher #2] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:88)
2015-02-21 19:01:22,586 DEBUG [ContainerLauncher #2] org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
2015-02-21 19:01:22,586 DEBUG [ContainerLauncher #2] org.apache.hadoop.ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@27c8bfa4
2015-02-21 19:01:22,587 DEBUG [ContainerLauncher #2] org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.
2015-02-21 19:01:22,587 DEBUG [ContainerLauncher #2] org.apache.hadoop.ipc.Client: Connecting to hadoop2.rdpratti.com/192.168.2.251:8041
2015-02-21 19:01:22,587 DEBUG [ContainerLauncher #2] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
2015-02-21 19:01:22,597 DEBUG [ContainerLauncher #2] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 19:01:22,630 DEBUG [ContainerLauncher #2] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"4zII7zyO1NfqgBiA6tKIaM2gSpzxmRE2B/flVfLC\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}

2015-02-21 19:01:22,630 DEBUG [ContainerLauncher #2] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@5779280f
2015-02-21 19:01:22,630 INFO [ContainerLauncher #2] org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: 192.168.2.251:8041. Current token is Kind: NMToken, Service: 192.168.2.251:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@53351da6)
2015-02-21 19:01:22,630 DEBUG [ContainerLauncher #2] org.apache.hadoop.security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2015-02-21 19:01:22,630 DEBUG [ContainerLauncher #2] org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2015-02-21 19:01:22,630 DEBUG [ContainerLauncher #2] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting username: AAABS63OA3sAAAACAAAAAQAZaGFkb29wMi5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo
2015-02-21 19:01:22,631 DEBUG [ContainerLauncher #2] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting userPassword
2015-02-21 19:01:22,631 DEBUG [ContainerLauncher #2] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting realm: default
2015-02-21 19:01:22,631 DEBUG [ContainerLauncher #2] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
token: "charset=utf-8,username=\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMi5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=\"default\",nonce=\"4zII7zyO1NfqgBiA6tKIaM2gSpzxmRE2B/flVfLC\",nc=00000001,cnonce=\"l5hjrn108JE0wNqvvcmx+upi9L4/DUqF2/X8499m\",digest-uri=\"/default\",maxbuf=65536,response=132cdbcfc0e3b3a6444b5d555606d17b,qop=auth"
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
}

2015-02-21 19:01:22,656 DEBUG [ContainerLauncher #2] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: SUCCESS
token: "rspauth=8b07d48fa6112b6ada6f991d82488d90"

2015-02-21 19:01:22,657 DEBUG [ContainerLauncher #2] org.apache.hadoop.ipc.Client: Negotiated QOP is :auth
2015-02-21 19:01:22,657 DEBUG [IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001: starting, having connections 3
2015-02-21 19:01:22,659 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001 sending #28
2015-02-21 19:01:22,855 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #75
2015-02-21 19:01:22,856 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#75 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:22,856 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:22,856 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 1 procesingTime= 0
2015-02-21 19:01:22,856 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#75 Retry#0
2015-02-21 19:01:22,856 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#75 Retry#0 Wrote 32 bytes.
2015-02-21 19:01:22,857 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #76
2015-02-21 19:01:22,857 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#76 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:22,857 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:22,858 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 1
2015-02-21 19:01:22,858 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#76 Retry#0
2015-02-21 19:01:22,858 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#76 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:22,859 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #77
2015-02-21 19:01:22,859 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#77 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:22,859 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:22,859 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:22,860 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#77 Retry#0
2015-02-21 19:01:22,860 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#77 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:23,010 DEBUG [IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001 got value #28
2015-02-21 19:01:23,010 DEBUG [IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001: closed
2015-02-21 19:01:23,010 DEBUG [IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001: stopped, remaining connections 2
2015-02-21 19:01:23,010 DEBUG [ContainerLauncher #2] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: startContainers took 423ms
2015-02-21 19:01:23,011 INFO [ContainerLauncher #2] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1424550134651_0002_m_000000_0 : 13562
2015-02-21 19:01:23,011 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerLaunchedEvent.EventType: TA_CONTAINER_LAUNCHED
2015-02-21 19:01:23,011 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000000_0 of type TA_CONTAINER_LAUNCHED
2015-02-21 19:01:23,011 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1424550134651_0002_m_000000_0] using containerId: [container_1424550134651_0002_01_000004 on NM: [hadoop2.rdpratti.com:8041]
2015-02-21 19:01:23,011 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000000_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2015-02-21 19:01:23,011 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:23,011 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:23,012 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: MAP_ATTEMPT_STARTED
2015-02-21 19:01:23,012 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_START
2015-02-21 19:01:23,012 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:23,012 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType: T_ATTEMPT_LAUNCHED
2015-02-21 19:01:23,012 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_m_000000 of type T_ATTEMPT_LAUNCHED
2015-02-21 19:01:23,012 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1424550134651_0002_m_000000 Task Transitioned from SCHEDULED to RUNNING
2015-02-21 19:01:23,012 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler MAP_ATTEMPT_STARTED
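From this point the log leaves the task state machine and shows the AM's embedded Jetty/Jersey webapp answering a job-status poll relayed through the RM proxy (the AM_PROXY_FILTER in the filter chain below). You can issue the same GET by hand; a sketch, assuming the default RM web port 8088 and the usual /proxy/<appid> URL layout:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class JobStatusPoll {
        public static void main(String[] args) throws Exception {
            // RM proxy path to the MR AM's REST API for this job.
            URL u = new URL("http://hadoop0.rdpratti.com:8088/proxy/"
                    + "application_1424550134651_0002"
                    + "/ws/v1/mapreduce/jobs/job_1424550134651_0002");
            HttpURLConnection c = (HttpURLConnection) u.openConnection();
            c.setRequestProperty("Accept", "application/json");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(c.getInputStream(), "UTF-8"))) {
                for (String line; (line = in.readLine()) != null; ) {
                    System.out.println(line); // JSON job report: state, progress, ...
                }
            }
        }
    }
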
2015-02-21 19:01:23,347 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: REQUEST /ws/v1/mapreduce/jobs/job_1424550134651_0002 on org.mortbay.jetty.HttpConnection@fc7afc9
2015-02-21 19:01:23,350 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: sessionManager=org.mortbay.jetty.servlet.HashSessionManager@4befbfaf
2015-02-21 19:01:23,350 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: session=null
2015-02-21 19:01:23,350 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: servlet=default
2015-02-21 19:01:23,350 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: chain=NoCacheFilter->safety->AM_PROXY_FILTER->guice->default
2015-02-21 19:01:23,351 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: servlet holder=default
2015-02-21 19:01:23,351 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: call filter NoCacheFilter
2015-02-21 19:01:23,351 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: call filter safety
2015-02-21 19:01:23,351 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: call filter AM_PROXY_FILTER
2015-02-21 19:01:23,351 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter: Remote address for request is: 192.168.2.253
2015-02-21 19:01:23,352 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter: proxy address is: 192.168.2.253
2015-02-21 19:01:23,353 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: call filter guice
2015-02-21 19:01:23,391 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded class com.sun.jersey.server.impl.container.WebApplicationProviderImpl from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:23,419 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: getResource(META-INF/services/javax.ws.rs.ext.RuntimeDelegate)=jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jersey-server-1.9.jar!/META-INF/services/javax.ws.rs.ext.RuntimeDelegate
2015-02-21 19:01:23,420 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded class com.sun.jersey.server.impl.provider.RuntimeDelegateImpl from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:23,425 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded class com.sun.jersey.core.impl.provider.header.LocaleProvider from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:23,426 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded class com.sun.jersey.core.impl.provider.header.EntityTagProvider from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:23,426 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded class com.sun.jersey.core.impl.provider.header.MediaTypeProvider from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:23,427 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded class com.sun.jersey.core.impl.provider.header.CacheControlProvider from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:23,428 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded class com.sun.jersey.core.impl.provider.header.NewCookieProvider from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:23,428 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded class com.sun.jersey.core.impl.provider.header.CookieProvider from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:23,428 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded class com.sun.jersey.core.impl.provider.header.URIProvider from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:23,429 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded class com.sun.jersey.core.impl.provider.header.DateProvider from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:23,429 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded class com.sun.jersey.core.impl.provider.header.StringProvider from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:23,468 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded class com.sun.jersey.server.impl.model.method.dispatch.VoidVoidDispatchProvider from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:23,469 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded class com.sun.jersey.server.impl.model.method.dispatch.HttpReqResDispatchProvider from sun.misc.Launcher$AppClassLoader@7d487b8b
2015-02-21 19:01:23,470 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded class com.sun.jersey.server.impl.model.method.dispatch.MultipartFormDispatchProvider
from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,470 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.server.impl.model.method.dispatch.FormDispatchProvider = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,470 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.server.impl.model.method.dispatch.EntityParamDispatchProvi= der from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,473 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = interface javax.annotation.PostConstruct=0A= 2015-02-21 19:01:23,473 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = interface javax.annotation.PostConstruct from null=0A= 2015-02-21 19:01:23,482 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = interface javax.annotation.PreDestroy=0A= 2015-02-21 19:01:23,482 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = interface javax.annotation.PreDestroy from null=0A= 2015-02-21 19:01:23,497 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.core.impl.provider.xml.SAXParserContextProvider = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,497 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.xml.XMLStreamReaderContextProvider = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,498 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.xml.DocumentBuilderFactoryProvider = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,498 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.core.impl.provider.xml.TransformerFactoryProvider = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,525 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.api.container.filter.GZIPContentEncodingFilter from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,530 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.server.impl.container.filter.NormalizeFilter from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,549 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: = getResource(META-INF/services/javax.xml.bind.JAXBContext)=3Djar:file:/opt= /cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jaxb-impl-2.2.3-1.jar!/= META-INF/services/javax.xml.bind.JAXBContext=0A= 2015-02-21 19:01:23,550 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.xml.bind.v2.ContextFactory from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,584 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = 
hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #29=0A= 2015-02-21 19:01:23,586 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #29=0A= 2015-02-21 19:01:23,586 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms=0A= 2015-02-21 19:01:23,587 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: = getResources() for application_1424550134651_0002: ask=3D3 release=3D 0 = newContainers=3D0 finishedContainers=3D0 resourcelimit=3D knownNMs=3D4=0A= 2015-02-21 19:01:23,587 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: = Recalculating schedule, headroom=3D=0A= 2015-02-21 19:01:23,587 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow = start threshold not met. completedMapsForReduceSlowstart 4=0A= 2015-02-21 19:01:23,588 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.org.apache.xerces.internal.jaxp.datatype.DatatypeFactoryImpl = from null=0A= 2015-02-21 19:01:23,634 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.org.apache.xerces.internal.jaxp.datatype.DatatypeFactoryImpl = from null=0A= 2015-02-21 19:01:23,861 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #78=0A= 2015-02-21 19:01:23,861 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#78 Retry#0 for RpcKind = RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:23,862 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:23,862 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents = queueTime=3D 1 procesingTime=3D 0=0A= 2015-02-21 19:01:23,862 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#78 Retry#0=0A= 2015-02-21 19:01:23,862 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#78 Retry#0 Wrote 32 bytes.=0A= 2015-02-21 19:01:23,863 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #79=0A= 2015-02-21 19:01:23,863 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#79 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:23,863 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:23,864 DEBUG 
[IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 1 = procesingTime=3D 0=0A= 2015-02-21 19:01:23,864 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#79 Retry#0=0A= 2015-02-21 19:01:23,864 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#79 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:23,865 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #80=0A= 2015-02-21 19:01:23,866 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#80 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:23,866 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:23,866 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 0=0A= 2015-02-21 19:01:23,867 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#80 Retry#0=0A= 2015-02-21 19:01:23,867 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#80 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:23,874 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.core.impl.provider.entity.StringProvider from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,875 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.core.impl.provider.entity.ByteArrayProvider from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,875 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.core.impl.provider.entity.FileProvider from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,876 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.core.impl.provider.entity.InputStreamProvider from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,876 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.core.impl.provider.entity.DataSourceProvider from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,877 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.core.impl.provider.entity.RenderedImageProvider = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,877 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] 
org.mortbay.log: loaded = class com.sun.jersey.core.impl.provider.entity.MimeMultipartProvider = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,878 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.core.impl.provider.entity.FormProvider from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,878 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.entity.FormMultivaluedMapProvider from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,880 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.entity.XMLRootElementProvider$App from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,880 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.entity.XMLRootElementProvider$Text = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,880 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.entity.XMLRootElementProvider$General = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,882 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.entity.XMLJAXBElementProvider$App from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,882 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.entity.XMLJAXBElementProvider$Text = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,883 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.entity.XMLJAXBElementProvider$General = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,884 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.entity.XMLListElementProvider$App from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,884 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.entity.XMLListElementProvider$Text = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,885 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.entity.XMLListElementProvider$General = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,885 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.core.impl.provider.entity.ReaderProvider from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,885 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.core.impl.provider.entity.DocumentProvider from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,886 
DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.entity.SourceProvider$StreamSourceReade= r from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,886 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.entity.SourceProvider$SAXSourceReader = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,887 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.entity.SourceProvider$DOMSourceReader = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,887 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.core.impl.provider.entity.XMLRootObjectProvider$App = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,887 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.entity.XMLRootObjectProvider$Text from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,888 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.entity.XMLRootObjectProvider$General = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,888 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.core.impl.provider.entity.EntityHolderReader from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,890 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.json.impl.provider.entity.JSONRootElementProvider$App = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,890 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.json.impl.provider.entity.JSONRootElementProvider$General = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,891 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.json.impl.provider.entity.JSONJAXBElementProvider$App = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,891 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.json.impl.provider.entity.JSONJAXBElementProvider$General = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,892 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.json.impl.provider.entity.JSONListElementProvider$App = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,892 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.json.impl.provider.entity.JSONListElementProvider$General = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,893 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class 
com.sun.jersey.json.impl.provider.entity.JSONArrayProvider$App = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,894 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.json.impl.provider.entity.JSONArrayProvider$General = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,894 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.json.impl.provider.entity.JSONObjectProvider$App = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,895 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.json.impl.provider.entity.JSONObjectProvider$General from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:23,896 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.json.impl.provider.entity.JacksonProviderProxy from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:24,084 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:24,084 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.core.impl.provider.entity.SourceProvider$SourceWriter = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:24,085 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.server.impl.template.ViewableMessageBodyWriter from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:24,087 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.jersey.json.impl.provider.entity.JSONWithPaddingProvider = from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:24,106 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.server.impl.model.parameter.multivalued.StringReaderProvid= ers$TypeFromStringEnum from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:24,108 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.server.impl.model.parameter.multivalued.StringReaderProvid= ers$TypeValueOf from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:24,108 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.server.impl.model.parameter.multivalued.StringReaderProvid= ers$TypeFromString from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:24,109 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.server.impl.model.parameter.multivalued.StringReaderProvid= ers$StringConstructor from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:24,109 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.server.impl.model.parameter.multivalued.StringReaderProvid= 
ers$DateProvider from sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:24,110 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class = com.sun.jersey.server.impl.model.parameter.multivalued.JAXBStringReaderPr= oviders$RootElementProvider from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:24,221 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: = getResource(META-INF/services/javax.xml.bind.JAXBContext)=3Djar:file:/opt= /cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/jaxb-impl-2.2.3-1.jar!/= META-INF/services/javax.xml.bind.JAXBContext=0A= 2015-02-21 19:01:24,222 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.xml.bind.v2.ContextFactory from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:24,222 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: loaded = class com.sun.research.ws.wadl.ObjectFactory from = sun.misc.Launcher$AppClassLoader@7d487b8b=0A= 2015-02-21 19:01:24,460 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: RESPONSE = /ws/v1/mapreduce/jobs/job_1424550134651_0002 200=0A= 2015-02-21 19:01:24,462 DEBUG [970736822@qtp-700266387-0] = org.mortbay.log: EOF=0A= 2015-02-21 19:01:24,588 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #30=0A= 2015-02-21 19:01:24,589 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #30=0A= 2015-02-21 19:01:24,589 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms=0A= 2015-02-21 19:01:24,589 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: = Recalculating schedule, headroom=3D=0A= 2015-02-21 19:01:24,589 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow = start threshold not met. 
completedMapsForReduceSlowstart 4=0A= 2015-02-21 19:01:24,868 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #81=0A= 2015-02-21 19:01:24,869 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#81 Retry#0 for RpcKind = RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:24,869 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:24,869 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents = queueTime=3D 0 procesingTime=3D 0=0A= 2015-02-21 19:01:24,869 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#81 Retry#0=0A= 2015-02-21 19:01:24,869 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#81 Retry#0 Wrote 32 bytes.=0A= 2015-02-21 19:01:24,870 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #82=0A= 2015-02-21 19:01:24,870 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#82 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:24,870 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:24,871 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 1 = procesingTime=3D 0=0A= 2015-02-21 19:01:24,871 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#82 Retry#0=0A= 2015-02-21 19:01:24,871 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#82 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:24,872 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #83=0A= 2015-02-21 19:01:24,872 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#83 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:24,872 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:24,873 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 1=0A= 2015-02-21 19:01:24,873 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: 
IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#83 Retry#0=0A= 2015-02-21 19:01:24,873 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#83 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:25,056 DEBUG [IPC Server listener on 35954] = org.apache.hadoop.ipc.Server: Server connection from = 192.168.2.252:43596; # active connections: 1; # queued calls: 0=0A= 2015-02-21 19:01:25,147 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #-33=0A= 2015-02-21 19:01:25,148 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.security.SaslRpcServer: Created SASL server with = mechanism =3D DIGEST-MD5=0A= 2015-02-21 19:01:25,149 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Sending sasl message state: NEGOTIATE=0A= auths {=0A= method: "TOKEN"=0A= mechanism: "DIGEST-MD5"=0A= protocol: ""=0A= serverId: "default"=0A= challenge: = "realm=3D\"default\",nonce=3D\"7F/DsSCp1Vy6d8VfERh/du+YzFeM3vrwyOtSRCsw\"= ,qop=3D\"auth\",charset=3Dutf-8,algorithm=3Dmd5-sess"=0A= }=0A= auths {=0A= method: "SIMPLE"=0A= mechanism: ""=0A= }=0A= =0A= 2015-02-21 19:01:25,149 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = responding to null from 192.168.2.252:43596 Call#-33 Retry#-1=0A= 2015-02-21 19:01:25,149 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = responding to null from 192.168.2.252:43596 Call#-33 Retry#-1 Wrote 178 = bytes.=0A= 2015-02-21 19:01:25,241 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #-33=0A= 2015-02-21 19:01:25,241 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Have read input token of size 270 for = processing by saslServer.evaluateResponse()=0A= 2015-02-21 19:01:25,242 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 = callback: setting password for client: job_1424550134651_0002 = (auth:SIMPLE)=0A= 2015-02-21 19:01:25,242 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 = callback: setting canonicalized client ID: job_1424550134651_0002=0A= 2015-02-21 19:01:25,243 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Will send SUCCESS token of size 40 from = saslServer.=0A= 2015-02-21 19:01:25,243 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: SASL server context established. 
= Negotiated QoP is auth=0A= 2015-02-21 19:01:25,243 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: SASL server successfully authenticated = client: job_1424550134651_0002 (auth:SIMPLE)=0A= 2015-02-21 19:01:25,243 INFO [Socket Reader #1 for port 35954] = SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for = job_1424550134651_0002 (auth:SIMPLE)=0A= 2015-02-21 19:01:25,243 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Sending sasl message state: SUCCESS=0A= token: "rspauth=3D60ee99c7a3678c79eea1d322f2a8dfff"=0A= =0A= 2015-02-21 19:01:25,243 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = responding to null from 192.168.2.252:43596 Call#-33 Retry#-1=0A= 2015-02-21 19:01:25,243 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = responding to null from 192.168.2.252:43596 Call#-33 Retry#-1 Wrote 64 = bytes.=0A= 2015-02-21 19:01:25,268 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #-3=0A= 2015-02-21 19:01:25,269 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Successfully authorized userInfo {=0A= }=0A= protocol: "org.apache.hadoop.mapred.TaskUmbilicalProtocol"=0A= =0A= 2015-02-21 19:01:25,269 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #0=0A= 2015-02-21 19:01:25,274 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 35954: = getTask(org.apache.hadoop.mapred.JvmContext@5dcd8d66), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43596 Call#0 Retry#0 for RpcKind RPC_WRITABLE=0A= 2015-02-21 19:01:25,274 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:job_1424550134651_0002 (auth:TOKEN) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:25,275 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.metrics2.lib.MutableRates: done=0A= 2015-02-21 19:01:25,275 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.metrics2.lib.MutableRates: fatalError=0A= 2015-02-21 19:01:25,276 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.metrics2.lib.MutableRates: getTask=0A= 2015-02-21 19:01:25,276 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.metrics2.lib.MutableRates: canCommit=0A= 2015-02-21 19:01:25,276 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.metrics2.lib.MutableRates: commitPending=0A= 2015-02-21 19:01:25,276 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.metrics2.lib.MutableRates: fsError=0A= 2015-02-21 19:01:25,276 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.metrics2.lib.MutableRates: shuffleError=0A= 2015-02-21 19:01:25,276 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.metrics2.lib.MutableRates: getMapCompletionEvents=0A= 2015-02-21 19:01:25,276 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.metrics2.lib.MutableRates: ping=0A= 2015-02-21 19:01:25,276 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.metrics2.lib.MutableRates: reportDiagnosticInfo=0A= 2015-02-21 19:01:25,276 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.metrics2.lib.MutableRates: statusUpdate=0A= 2015-02-21 19:01:25,276 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.metrics2.lib.MutableRates: reportNextRecordRange=0A= 2015-02-21 19:01:25,276 INFO [IPC Server handler 0 on 
35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : = jvm_1424550134651_0002_m_000002 asked for a task=0A= 2015-02-21 19:01:25,277 INFO [IPC Server handler 0 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: = jvm_1424550134651_0002_m_000002 given task: = attempt_1424550134651_0002_m_000003_0=0A= 2015-02-21 19:01:25,277 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.ipc.Server: Served: getTask queueTime=3D 1 = procesingTime=3D 2=0A= 2015-02-21 19:01:25,278 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 35954: responding = to getTask(org.apache.hadoop.mapred.JvmContext@5dcd8d66), rpc = version=3D2, client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43596 Call#0 Retry#0=0A= 2015-02-21 19:01:25,278 DEBUG [IPC Server handler 0 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 35954: responding = to getTask(org.apache.hadoop.mapred.JvmContext@5dcd8d66), rpc = version=3D2, client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43596 Call#0 Retry#0 Wrote 379 bytes.=0A= 2015-02-21 19:01:25,590 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #31=0A= 2015-02-21 19:01:25,591 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #31=0A= 2015-02-21 19:01:25,591 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 1ms=0A= 2015-02-21 19:01:25,592 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: = Recalculating schedule, headroom=3D=0A= 2015-02-21 19:01:25,592 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow = start threshold not met. 
completedMapsForReduceSlowstart 4=0A= 2015-02-21 19:01:25,874 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #84=0A= 2015-02-21 19:01:25,875 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#84 Retry#0 for RpcKind = RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:25,875 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:25,875 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents = queueTime=3D 1 procesingTime=3D 0=0A= 2015-02-21 19:01:25,875 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#84 Retry#0=0A= 2015-02-21 19:01:25,879 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #85=0A= 2015-02-21 19:01:25,880 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#84 Retry#0 Wrote 32 bytes.=0A= 2015-02-21 19:01:25,880 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#85 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:25,880 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:25,881 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 1 = procesingTime=3D 1=0A= 2015-02-21 19:01:25,881 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#85 Retry#0=0A= 2015-02-21 19:01:25,881 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#85 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:25,882 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #86=0A= 2015-02-21 19:01:25,883 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#86 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:25,883 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:25,883 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 1 = procesingTime=3D 0=0A= 2015-02-21 19:01:25,884 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: 
IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#86 Retry#0=0A= 2015-02-21 19:01:25,885 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#86 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:26,593 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #32=0A= 2015-02-21 19:01:26,594 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #32=0A= 2015-02-21 19:01:26,594 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms=0A= 2015-02-21 19:01:26,595 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: = Recalculating schedule, headroom=3D=0A= 2015-02-21 19:01:26,595 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow = start threshold not met. completedMapsForReduceSlowstart 4=0A= 2015-02-21 19:01:26,885 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #87=0A= 2015-02-21 19:01:26,885 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#87 Retry#0 for RpcKind = RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:26,886 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:26,886 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents = queueTime=3D 1 procesingTime=3D 0=0A= 2015-02-21 19:01:26,886 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#87 Retry#0=0A= 2015-02-21 19:01:26,886 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#87 Retry#0 Wrote 32 bytes.=0A= 2015-02-21 19:01:26,887 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #88=0A= 2015-02-21 19:01:26,887 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#88 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:26,887 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:26,888 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 1=0A= 2015-02-21 19:01:26,888 DEBUG 
[IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#88 Retry#0=0A= 2015-02-21 19:01:26,888 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#88 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:26,889 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #89=0A= 2015-02-21 19:01:26,889 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#89 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:26,889 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:26,890 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 1=0A= 2015-02-21 19:01:26,890 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#89 Retry#0=0A= 2015-02-21 19:01:26,890 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#89 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:27,595 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #33=0A= 2015-02-21 19:01:27,597 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #33=0A= 2015-02-21 19:01:27,597 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms=0A= 2015-02-21 19:01:27,597 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: = Recalculating schedule, headroom=3D=0A= 2015-02-21 19:01:27,597 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow = start threshold not met. 
completedMapsForReduceSlowstart 4=0A= 2015-02-21 19:01:27,713 DEBUG [IPC Server listener on 35954] = org.apache.hadoop.ipc.Server: Server connection from = 192.168.2.250:34282; # active connections: 2; # queued calls: 0=0A= 2015-02-21 19:01:27,891 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #90=0A= 2015-02-21 19:01:27,892 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#90 Retry#0 for RpcKind = RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:27,892 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:27,892 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents = queueTime=3D 1 procesingTime=3D 0=0A= 2015-02-21 19:01:27,892 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#90 Retry#0=0A= 2015-02-21 19:01:27,892 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#90 Retry#0 Wrote 32 bytes.=0A= 2015-02-21 19:01:27,893 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #91=0A= 2015-02-21 19:01:27,893 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#91 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:27,893 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:27,893 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 0=0A= 2015-02-21 19:01:27,894 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#91 Retry#0=0A= 2015-02-21 19:01:27,894 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#91 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:27,895 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #92=0A= 2015-02-21 19:01:27,895 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#92 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:27,895 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:27,895 DEBUG [IPC Server handler 0 on 59910] 
= org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 0=0A= 2015-02-21 19:01:27,895 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#92 Retry#0=0A= 2015-02-21 19:01:27,896 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#92 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:27,913 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #-33=0A= 2015-02-21 19:01:27,913 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.security.SaslRpcServer: Created SASL server with = mechanism =3D DIGEST-MD5=0A= 2015-02-21 19:01:27,913 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Sending sasl message state: NEGOTIATE=0A= auths {=0A= method: "TOKEN"=0A= mechanism: "DIGEST-MD5"=0A= protocol: ""=0A= serverId: "default"=0A= challenge: = "realm=3D\"default\",nonce=3D\"Da3CWI7AsubYpuRaOlEUhy9jnNDU1GJmJBtRIuYk\"= ,qop=3D\"auth\",charset=3Dutf-8,algorithm=3Dmd5-sess"=0A= }=0A= auths {=0A= method: "SIMPLE"=0A= mechanism: ""=0A= }=0A= =0A= 2015-02-21 19:01:27,914 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = responding to null from 192.168.2.250:34282 Call#-33 Retry#-1=0A= 2015-02-21 19:01:27,914 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = responding to null from 192.168.2.250:34282 Call#-33 Retry#-1 Wrote 178 = bytes.=0A= 2015-02-21 19:01:28,070 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #-33=0A= 2015-02-21 19:01:28,071 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Have read input token of size 270 for = processing by saslServer.evaluateResponse()=0A= 2015-02-21 19:01:28,071 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 = callback: setting password for client: job_1424550134651_0002 = (auth:SIMPLE)=0A= 2015-02-21 19:01:28,072 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 = callback: setting canonicalized client ID: job_1424550134651_0002=0A= 2015-02-21 19:01:28,072 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Will send SUCCESS token of size 40 from = saslServer.=0A= 2015-02-21 19:01:28,072 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: SASL server context established. 
= Negotiated QoP is auth=0A= 2015-02-21 19:01:28,072 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: SASL server successfully authenticated = client: job_1424550134651_0002 (auth:SIMPLE)=0A= 2015-02-21 19:01:28,072 INFO [Socket Reader #1 for port 35954] = SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for = job_1424550134651_0002 (auth:SIMPLE)=0A= 2015-02-21 19:01:28,072 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Sending sasl message state: SUCCESS=0A= token: "rspauth=3Da39977383deb70b2b6d841f60ea5d711"=0A= =0A= 2015-02-21 19:01:28,072 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = responding to null from 192.168.2.250:34282 Call#-33 Retry#-1=0A= 2015-02-21 19:01:28,072 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = responding to null from 192.168.2.250:34282 Call#-33 Retry#-1 Wrote 64 = bytes.=0A= 2015-02-21 19:01:28,103 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #-3=0A= 2015-02-21 19:01:28,103 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Successfully authorized userInfo {=0A= }=0A= protocol: "org.apache.hadoop.mapred.TaskUmbilicalProtocol"=0A= =0A= 2015-02-21 19:01:28,103 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #0=0A= 2015-02-21 19:01:28,104 DEBUG [IPC Server handler 1 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 1 on 35954: = getTask(org.apache.hadoop.mapred.JvmContext@701f714b), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.250:34282 Call#0 Retry#0 for RpcKind RPC_WRITABLE=0A= 2015-02-21 19:01:28,104 DEBUG [IPC Server handler 1 on 35954] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:job_1424550134651_0002 (auth:TOKEN) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:28,104 INFO [IPC Server handler 1 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : = jvm_1424550134651_0002_m_000003 asked for a task=0A= 2015-02-21 19:01:28,104 INFO [IPC Server handler 1 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: = jvm_1424550134651_0002_m_000003 given task: = attempt_1424550134651_0002_m_000004_0=0A= 2015-02-21 19:01:28,105 DEBUG [IPC Server handler 1 on 35954] = org.apache.hadoop.ipc.Server: Served: getTask queueTime=3D 0 = procesingTime=3D 1=0A= 2015-02-21 19:01:28,105 DEBUG [IPC Server handler 1 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 1 on 35954: responding = to getTask(org.apache.hadoop.mapred.JvmContext@701f714b), rpc = version=3D2, client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.250:34282 Call#0 Retry#0=0A= 2015-02-21 19:01:28,105 DEBUG [IPC Server handler 1 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 1 on 35954: responding = to getTask(org.apache.hadoop.mapred.JvmContext@701f714b), rpc = version=3D2, client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.250:34282 Call#0 Retry#0 Wrote 379 bytes.=0A= 2015-02-21 19:01:28,196 DEBUG [IPC Server idle connection scanner for = port 59910] org.apache.hadoop.ipc.Server: IPC Server idle connection = scanner for port 59910: task running=0A= 2015-02-21 19:01:28,272 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #4=0A= 2015-02-21 19:01:28,274 DEBUG [IPC Server handler 0 on 35954] = 
2015-02-21 19:01:28,274 DEBUG [IPC Server handler 0 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 35954: statusUpdate(attempt_1424550134651_0002_m_000003_0, org.apache.hadoop.mapred.MapTaskStatus@5d3183e3), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43596 Call#4 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:28,274 DEBUG [IPC Server handler 0 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:28,274 INFO [IPC Server handler 0 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1424550134651_0002_m_000003_0 is : 0.0
2015-02-21 19:01:28,277 DEBUG [IPC Server handler 0 on 35954] org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime= 0 procesingTime= 3
2015-02-21 19:01:28,277 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.EventType: TA_UPDATE
2015-02-21 19:01:28,277 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000003_0 of type TA_UPDATE
2015-02-21 19:01:28,277 DEBUG [IPC Server handler 0 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 35954: responding to statusUpdate(attempt_1424550134651_0002_m_000003_0, org.apache.hadoop.mapred.MapTaskStatus@5d3183e3), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43596 Call#4 Retry#0
2015-02-21 19:01:28,278 DEBUG [IPC Server handler 0 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 35954: responding to statusUpdate(attempt_1424550134651_0002_m_000003_0, org.apache.hadoop.mapred.MapTaskStatus@5d3183e3), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43596 Call#4 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:28,281 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:28,589 DEBUG [IPC Server listener on 35954] org.apache.hadoop.ipc.Server: Server connection from 192.168.2.251:58635; # active connections: 3; # queued calls: 0
2015-02-21 19:01:28,598 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #34
2015-02-21 19:01:28,599 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #34
2015-02-21 19:01:28,599 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 1ms
2015-02-21 19:01:28,600 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=
2015-02-21 19:01:28,600 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 4
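
On the "Reduce slow start threshold not met. completedMapsForReduceSlowstart 4" message, which repeats every second or so below: this is expected, not part of the problem. The AM holds reducers back until a configured fraction of the maps has finished (mapreduce.job.reduce.slowstart.completedmaps); here that works out to 4 completed maps. For anyone who wants reducers launched earlier or later, it can be overridden per job with the -D generic option, in the same spot as -conf - the value and placeholders below are just illustrative:

  hadoop jar <job.jar> <MainClass> -conf <cluster-conf.xml> -D mapreduce.job.reduce.slowstart.completedmaps=0.50 <input> <output>
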
2015-02-21 19:01:28,683 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #6
2015-02-21 19:01:28,688 DEBUG [IPC Server handler 1 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 1 on 35954: statusUpdate(attempt_1424550134651_0002_m_000003_0, org.apache.hadoop.mapred.MapTaskStatus@450f4f9e), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43596 Call#6 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:28,688 DEBUG [IPC Server handler 1 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:28,689 INFO [IPC Server handler 1 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1424550134651_0002_m_000003_0 is : 1.0
2015-02-21 19:01:28,693 DEBUG [IPC Server handler 1 on 35954] org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime= 0 procesingTime= 5
2015-02-21 19:01:28,693 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.EventType: TA_UPDATE
2015-02-21 19:01:28,693 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000003_0 of type TA_UPDATE
2015-02-21 19:01:28,693 DEBUG [IPC Server handler 1 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 1 on 35954: responding to statusUpdate(attempt_1424550134651_0002_m_000003_0, org.apache.hadoop.mapred.MapTaskStatus@450f4f9e), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43596 Call#6 Retry#0
2015-02-21 19:01:28,694 DEBUG [IPC Server handler 1 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 1 on 35954: responding to statusUpdate(attempt_1424550134651_0002_m_000003_0, org.apache.hadoop.mapred.MapTaskStatus@450f4f9e), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43596 Call#6 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:28,694 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:28,694 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #7
2015-02-21 19:01:28,695 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 2 on 35954: done(attempt_1424550134651_0002_m_000003_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43596 Call#7 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:28,695 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:28,695 INFO [IPC Server handler 2 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1424550134651_0002_m_000003_0
2015-02-21 19:01:28,695 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.ipc.Server: Served: done queueTime= 0 procesingTime= 0
2015-02-21 19:01:28,695 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_DONE
2015-02-21 19:01:28,695 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000003_0 of type TA_DONE
2015-02-21 19:01:28,695 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 2 on 35954: responding to done(attempt_1424550134651_0002_m_000003_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43596 Call#7 Retry#0
2015-02-21 19:01:28,696 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 2 on 35954: responding to done(attempt_1424550134651_0002_m_000003_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43596 Call#7 Retry#0 Wrote 118 bytes.
2015-02-21 19:01:28,696 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000003_0 TaskAttempt Transitioned from RUNNING to SUCCESS_CONTAINER_CLEANUP
2015-02-21 19:01:28,696 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherEvent.EventType: CONTAINER_REMOTE_CLEANUP for container container_1424550134651_0002_01_000002 taskAttempt attempt_1424550134651_0002_m_000003_0
2015-02-21 19:01:28,697 INFO [ContainerLauncher #3] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_1424550134651_0002_01_000002 taskAttempt attempt_1424550134651_0002_m_000003_0
2015-02-21 19:01:28,697 INFO [ContainerLauncher #3] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1424550134651_0002_m_000003_0
2015-02-21 19:01:28,698 INFO [ContainerLauncher #3] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : hadoop3.rdpratti.com:8041
2015-02-21 19:01:28,698 DEBUG [ContainerLauncher #3] org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, Service: 192.168.2.252:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@40614c7b)
2015-02-21 19:01:28,698 DEBUG [ContainerLauncher #3] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:88)
2015-02-21 19:01:28,698 DEBUG [ContainerLauncher #3] org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
2015-02-21 19:01:28,699 DEBUG [ContainerLauncher #3] org.apache.hadoop.ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@27c8bfa4
2015-02-21 19:01:28,702 DEBUG [ContainerLauncher #3] org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.
2015-02-21 19:01:28,702 DEBUG [ContainerLauncher #3] org.apache.hadoop.ipc.Client: Connecting to hadoop3.rdpratti.com/192.168.2.252:8041
2015-02-21 19:01:28,702 DEBUG [ContainerLauncher #3] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
2015-02-21 19:01:28,703 DEBUG [ContainerLauncher #3] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 19:01:28,705 DEBUG [ContainerLauncher #3] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"sxwqRDKGXlilZ1PQDU9rct0bKg5wuZ1zfbWM1uLX\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}

2015-02-21 19:01:28,706 DEBUG [ContainerLauncher #3] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@1d6596ca
2015-02-21 19:01:28,706 INFO [ContainerLauncher #3] org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: 192.168.2.252:8041. Current token is Kind: NMToken, Service: 192.168.2.252:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@4d08e941)
2015-02-21 19:01:28,706 DEBUG [ContainerLauncher #3] org.apache.hadoop.security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2015-02-21 19:01:28,707 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: disconnecting client 192.168.2.252:43596. Number of active connections: 2
2015-02-21 19:01:28,707 DEBUG [ContainerLauncher #3] org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2015-02-21 19:01:28,707 DEBUG [ContainerLauncher #3] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting username: AAABS63OA3sAAAACAAAAAQAZaGFkb29wMy5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo
2015-02-21 19:01:28,707 DEBUG [ContainerLauncher #3] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting userPassword
2015-02-21 19:01:28,707 DEBUG [ContainerLauncher #3] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting realm: default
2015-02-21 19:01:28,708 DEBUG [ContainerLauncher #3] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
token: "charset=utf-8,username=\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMy5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=\"default\",nonce=\"sxwqRDKGXlilZ1PQDU9rct0bKg5wuZ1zfbWM1uLX\",nc=00000001,cnonce=\"HYqp13mI6rxH3cOSUPLSdoHzgqk25y7hgw7dACR8\",digest-uri=\"/default\",maxbuf=65536,response=428a5f644644574e1aedaf1a47d15295,qop=auth"
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
}

2015-02-21 19:01:28,710 DEBUG [ContainerLauncher #3] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: SUCCESS
token: "rspauth=c46eb6f1f8291dd7ff5e867a73ab64b3"

2015-02-21 19:01:28,710 DEBUG [ContainerLauncher #3] org.apache.hadoop.ipc.Client: Negotiated QOP is :auth
2015-02-21 19:01:28,711 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001: starting, having connections 3
2015-02-21 19:01:28,711 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001 sending #35
2015-02-21 19:01:28,731 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001 got value #35
2015-02-21 19:01:28,731 DEBUG [ContainerLauncher #3] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: stopContainers took 29ms
2015-02-21 19:01:28,731 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001: closed
2015-02-21 19:01:28,731 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001: stopped, remaining connections 2
2015-02-21 19:01:28,735 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_CONTAINER_CLEANED
2015-02-21 19:01:28,735 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000003_0 of type TA_CONTAINER_CLEANED
2015-02-21 19:01:28,736 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000003_0 TaskAttempt Transitioned from SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
2015-02-21 19:01:28,736 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:28,736 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:28,736 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: MAP_ATTEMPT_FINISHED
2015-02-21 19:01:28,737 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType: T_ATTEMPT_SUCCEEDED
2015-02-21 19:01:28,737 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:28,737 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_m_000003 of type T_ATTEMPT_SUCCEEDED
2015-02-21 19:01:28,745 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1424550134651_0002_m_000003_0
2015-02-21 19:01:28,746 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1424550134651_0002_m_000003 Task Transitioned from RUNNING to SUCCEEDED
2015-02-21 19:01:28,746 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:28,746 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskAttemptCompletedEvent.EventType: JOB_TASK_ATTEMPT_COMPLETED
2015-02-21 19:01:28,746 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_TASK_ATTEMPT_COMPLETED
2015-02-21 19:01:28,748 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:28,748 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskEvent.EventType: JOB_TASK_COMPLETED
2015-02-21 19:01:28,748 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_TASK_COMPLETED
2015-02-21 19:01:28,748 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock 15410 lastFlushOffset 0
2015-02-21 19:01:28,748 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
2015-02-21 19:01:28,748 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 0
2015-02-21 19:01:28,748 DEBUG [Thread-54] org.apache.hadoop.hdfs.DFSClient: Allocating new block
2015-02-21 19:01:28,748 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: TASK_FINISHED
2015-02-21 19:01:28,749 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 0
2015-02-21 19:01:28,749 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #36
2015-02-21 19:01:28,775 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #36
2015-02-21 19:01:28,775 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #-33
2015-02-21 19:01:28,776 DEBUG [Thread-54] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: addBlock took 28ms
2015-02-21 19:01:28,776 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.security.SaslRpcServer: Created SASL server with mechanism = DIGEST-MD5
2015-02-21 19:01:28,776 DEBUG [Thread-54] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.253:50010
2015-02-21 19:01:28,776 DEBUG [Thread-54] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.252:50010
2015-02-21 19:01:28,776 DEBUG [Thread-54] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.251:50010
2015-02-21 19:01:28,776 DEBUG [Thread-54] org.apache.hadoop.hdfs.DFSClient: Connecting to datanode 192.168.2.253:50010
2015-02-21 19:01:28,776 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Sending sasl message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"DyHG8/epsq0fem0tEXdepAkU8nT3PUMFDSr4IrkS\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}
auths {
  method: "SIMPLE"
  mechanism: ""
}

2015-02-21 19:01:28,776 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.251:58635 Call#-33 Retry#-1
2015-02-21 19:01:28,776 DEBUG [Thread-54] org.apache.hadoop.hdfs.DFSClient: Send buf size 124928
2015-02-21 19:01:28,776 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.251:58635 Call#-33 Retry#-1 Wrote 178 bytes.
2015-02-21 19:01:28,776 DEBUG [Thread-54] org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /192.168.2.253, datanodeId = 192.168.2.253:50010
2015-02-21 19:01:28,791 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 15410
2015-02-21 19:01:28,799 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 6901599
2015-02-21 19:01:28,801 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #37
2015-02-21 19:01:28,809 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #37
2015-02-21 19:01:28,809 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: fsync took 8ms
2015-02-21 19:01:28,811 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler MAP_ATTEMPT_FINISHED
2015-02-21 19:01:28,811 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:28,815 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=1, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=15360
2015-02-21 19:01:28,815 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:28,815 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock 18142 lastFlushOffset 15410
2015-02-21 19:01:28,815 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 1
2015-02-21 19:01:28,815 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 1
2015-02-21 19:01:28,815 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:1 offsetInBlock:15360 lastPacketInBlock:false lastByteOffsetInBlock: 18142
2015-02-21 19:01:28,818 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1890827
2015-02-21 19:01:28,818 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler TASK_FINISHED
2015-02-21 19:01:28,897 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #93
2015-02-21 19:01:28,897 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#93 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:28,897 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:28,898 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 0 procesingTime= 1
2015-02-21 19:01:28,898 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#93 Retry#0
2015-02-21 19:01:28,898 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#93 Retry#0 Wrote 101 bytes.
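
The eventHandlingThread / DataStreamer lines just above show the AM streaming the job history file (job_1424550134651_0002_1.jhist) into the job's HDFS staging directory through a three-datanode pipeline (.253 -> .252 -> .251). When a run dies early, that staging directory is worth a look, since the partial .jhist holds every event the AM managed to record; while the job's staging files still exist it can be listed with (path taken straight from the log above):

  hdfs dfs -ls /user/cloudera/.staging/job_1424550134651_0002/
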
2015-02-21 19:01:28,908 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #94
2015-02-21 19:01:28,908 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#94 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:28,909 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:28,909 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 1 procesingTime= 0
2015-02-21 19:01:28,909 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#94 Retry#0
2015-02-21 19:01:28,909 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#94 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:28,910 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #95
2015-02-21 19:01:28,910 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#95 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:28,911 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:28,911 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 1 procesingTime= 0
2015-02-21 19:01:28,911 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#95 Retry#0
2015-02-21 19:01:28,911 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#95 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:28,945 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #-33
2015-02-21 19:01:28,945 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Have read input token of size 270 for processing by saslServer.evaluateResponse()
2015-02-21 19:01:28,945 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting password for client: job_1424550134651_0002 (auth:SIMPLE)
2015-02-21 19:01:28,946 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting canonicalized client ID: job_1424550134651_0002
2015-02-21 19:01:28,946 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Will send SUCCESS token of size 40 from saslServer.
2015-02-21 19:01:28,946 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: SASL server context established. Negotiated QoP is auth
2015-02-21 19:01:28,946 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: SASL server successfully authenticated client: job_1424550134651_0002 (auth:SIMPLE)
2015-02-21 19:01:28,946 INFO [Socket Reader #1 for port 35954] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1424550134651_0002 (auth:SIMPLE)
2015-02-21 19:01:28,946 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Sending sasl message state: SUCCESS
token: "rspauth=60625e2919ae69d6440e12efef51b334"

2015-02-21 19:01:28,946 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.251:58635 Call#-33 Retry#-1
2015-02-21 19:01:28,946 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.251:58635 Call#-33 Retry#-1 Wrote 64 bytes.
2015-02-21 19:01:28,978 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #-3
2015-02-21 19:01:28,978 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Successfully authorized userInfo {
}
protocol: "org.apache.hadoop.mapred.TaskUmbilicalProtocol"

2015-02-21 19:01:28,978 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #0
2015-02-21 19:01:28,979 DEBUG [IPC Server handler 5 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 5 on 35954: getTask(org.apache.hadoop.mapred.JvmContext@464ef2d7), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58635 Call#0 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:28,979 DEBUG [IPC Server handler 5 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:28,979 INFO [IPC Server handler 5 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1424550134651_0002_m_000004 asked for a task
2015-02-21 19:01:28,979 INFO [IPC Server handler 5 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1424550134651_0002_m_000004 given task: attempt_1424550134651_0002_m_000000_0
2015-02-21 19:01:28,979 DEBUG [IPC Server handler 5 on 35954] org.apache.hadoop.ipc.Server: Served: getTask queueTime= 0 procesingTime= 0
2015-02-21 19:01:28,980 DEBUG [IPC Server handler 5 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 5 on 35954: responding to getTask(org.apache.hadoop.mapred.JvmContext@464ef2d7), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58635 Call#0 Retry#0
2015-02-21 19:01:28,980 DEBUG [IPC Server handler 5 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 5 on 35954: responding to getTask(org.apache.hadoop.mapred.JvmContext@464ef2d7), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58635 Call#0 Retry#0 Wrote 377 bytes.
2015-02-21 19:01:29,177 DEBUG [IPC Server idle connection scanner for port 35954] org.apache.hadoop.ipc.Server: IPC Server idle connection scanner for port 35954: task running
2015-02-21 19:01:29,600 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:4 ScheduledMaps:2 ScheduledReds:0 AssignedMaps:3 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:3 ContRel:0 HostLocal:2 RackLocal:1
2015-02-21 19:01:29,601 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #38
2015-02-21 19:01:29,602 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #38
2015-02-21 19:01:29,602 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms
2015-02-21 19:01:29,602 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=
2015-02-21 19:01:29,602 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 4
2015-02-21 19:01:29,913 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #96
2015-02-21 19:01:29,913 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#96 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:29,913 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:29,913 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 0 procesingTime= 0
2015-02-21 19:01:29,914 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#96 Retry#0
2015-02-21 19:01:29,914 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#96 Retry#0 Wrote 32 bytes.
2015-02-21 19:01:29,914 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #97
2015-02-21 19:01:29,914 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#97 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:29,915 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:29,915 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 1 procesingTime= 0
2015-02-21 19:01:29,915 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#97 Retry#0
2015-02-21 19:01:29,915 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#97 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:29,916 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #98
2015-02-21 19:01:29,916 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#98 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:29,916 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:29,917 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 1
2015-02-21 19:01:29,917 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#98 Retry#0
2015-02-21 19:01:29,917 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#98 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:30,603 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #39
2015-02-21 19:01:30,608 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #39
2015-02-21 19:01:30,608 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 5ms
2015-02-21 19:01:30,612 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received new Container :Container: [ContainerId: container_1424550134651_0002_01_000005, NodeId: hadoop3.rdpratti.com:8041, NodeHttpAddress: hadoop3.rdpratti.com:8042, Resource: , Priority: 20, Token: Token { kind: ContainerToken, service: 192.168.2.252:8041 }, ]
2015-02-21 19:01:30,612 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1424550134651_0002_01_000002
2015-02-21 19:01:30,612 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_CONTAINER_COMPLETED
2015-02-21 19:01:30,612 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000003_0 of type TA_CONTAINER_COMPLETED
2015-02-21 19:01:30,613 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2015-02-21 19:01:30,613 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptDiagnosticsUpdateEvent.EventType: TA_DIAGNOSTICS_UPDATE
2015-02-21 19:01:30,613 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning container container_1424550134651_0002_01_000005 with priority 20 to NM hadoop3.rdpratti.com:8041
2015-02-21 19:01:30,613 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000003_0 of type TA_DIAGNOSTICS_UPDATE
2015-02-21 19:01:30,613 INFO [RMCommunicator Allocator] org.apache.hadoop.yarn.util.RackResolver: Resolved hadoop3.rdpratti.com to /default
2015-02-21 19:01:30,613 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE decResourceRequest: applicationId=2 priority=20 resourceName=hadoop0.rdpratti.com numContainers=2 #asks=0
2015-02-21 19:01:30,613 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1424550134651_0002_m_000003_0: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2015-02-21 19:01:30,613 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER decResourceRequest: applicationId=2 priority=20 resourceName=hadoop0.rdpratti.com numContainers=1 #asks=1
2015-02-21 19:01:30,613 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE decResourceRequest: applicationId=2 priority=20 resourceName=/default numContainers=2 #asks=1
2015-02-21 19:01:30,613 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER decResourceRequest: applicationId=2 priority=20 resourceName=/default numContainers=1 #asks=2
2015-02-21 19:01:30,613 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE decResourceRequest: applicationId=2 priority=20 resourceName=* numContainers=2 #asks=2
2015-02-21 19:01:30,613 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER decResourceRequest: applicationId=2 priority=20 resourceName=* numContainers=1 #asks=3
2015-02-21 19:01:30,613 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerAssignedEvent.EventType: TA_ASSIGNED
2015-02-21 19:01:30,613 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1424550134651_0002_01_000005 to attempt_1424550134651_0002_m_000001_0
2015-02-21 19:01:30,613 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000001_0 of type TA_ASSIGNED
2015-02-21 19:01:30,613 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapred.SortedRanges: currentIndex 0 0:0
2015-02-21 19:01:30,613 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container (Container: [ContainerId: container_1424550134651_0002_01_000005, NodeId: hadoop3.rdpratti.com:8041, NodeHttpAddress: hadoop3.rdpratti.com:8042, Resource: , Priority: 20, Token: Token { kind: ContainerToken, service: 192.168.2.252:8041 }, ]) to task attempt_1424550134651_0002_m_000001_0 on node hadoop3.rdpratti.com:8041
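
Don't be thrown by the "Exit code is 143" diagnostics above - that is not the abend. This map attempt had already gone RUNNING -> SUCCESS_CONTAINER_CLEANUP -> SUCCEEDED, and 143 is just 128 + 15 (SIGTERM), i.e. the AM's own stopContainers call tearing down the finished JVM. When scanning a syslog for real container failures I filter these benign kills out first (the file name is mine):

  grep -n 'Exit code is' syslog-remote.txt | grep -v 'Exit code is 143'
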
2015-02-21 19:01:30,613 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned based on rack match /default
2015-02-21 19:01:30,614 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=
2015-02-21 19:01:30,614 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 4
2015-02-21 19:01:30,614 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:4 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:3 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:2 RackLocal:2
2015-02-21 19:01:30,614 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.util.RackResolver: Resolved hadoop3.rdpratti.com to /default
2015-02-21 19:01:30,614 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000001_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2015-02-21 19:01:30,614 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:30,614 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:30,614 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.launcher.ContainerRemoteLaunchEvent.EventType: CONTAINER_REMOTE_LAUNCH for container container_1424550134651_0002_01_000005 taskAttempt attempt_1424550134651_0002_m_000001_0
2015-02-21 19:01:30,614 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: TASK_CONTAINER_NEED_UPDATE
2015-02-21 19:01:30,615 INFO [ContainerLauncher #4] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1424550134651_0002_01_000005 taskAttempt attempt_1424550134651_0002_m_000001_0
2015-02-21 19:01:30,615 INFO [ContainerLauncher #4] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1424550134651_0002_m_000001_0
2015-02-21 19:01:30,615 INFO [ContainerLauncher #4] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : hadoop3.rdpratti.com:8041
2015-02-21 19:01:30,616 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, Service: 192.168.2.252:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@2d7b9d2a)
2015-02-21 19:01:30,616 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:88)
2015-02-21 19:01:30,616 DEBUG [ContainerLauncher #4] org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
2015-02-21 19:01:30,616 DEBUG [ContainerLauncher #4] org.apache.hadoop.ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@27c8bfa4
2015-02-21 19:01:30,625 DEBUG [ContainerLauncher #4] org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.
2015-02-21 19:01:30,625 DEBUG [ContainerLauncher #4] org.apache.hadoop.ipc.Client: Connecting to hadoop3.rdpratti.com/192.168.2.252:8041
2015-02-21 19:01:30,626 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
2015-02-21 19:01:30,626 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 19:01:30,627 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"7T3Xxk+Z6WGMZcsXTodkn2QlJJUCL+yZWEqT7PzF\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}

2015-02-21 19:01:30,627 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@1d5a06ba
2015-02-21 19:01:30,627 INFO [ContainerLauncher #4] org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: 192.168.2.252:8041. Current token is Kind: NMToken, Service: 192.168.2.252:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@16fac64)
2015-02-21 19:01:30,627 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2015-02-21 19:01:30,628 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2015-02-21 19:01:30,628 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting username: AAABS63OA3sAAAACAAAAAQAZaGFkb29wMy5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo
2015-02-21 19:01:30,628 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting userPassword
2015-02-21 19:01:30,628 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting realm: default
2015-02-21 19:01:30,629 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
token: "charset=utf-8,username=\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMy5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=\"default\",nonce=\"7T3Xxk+Z6WGMZcsXTodkn2QlJJUCL+yZWEqT7PzF\",nc=00000001,cnonce=\"ImjVb933EbVzjz6NVf2Cuyja5l+GczSDeCLv67g/\",digest-uri=\"/default\",maxbuf=65536,response=f48fbf80666013667a1917d28e4dca44,qop=auth"
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
}

2015-02-21 19:01:30,631 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: SUCCESS
token: "rspauth=42c3083f6e16386bf09b133e3b91bbe2"

2015-02-21 19:01:30,631 DEBUG [ContainerLauncher #4] org.apache.hadoop.ipc.Client: Negotiated QOP is :auth
2015-02-21 19:01:30,633 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001 sending #40
2015-02-21 19:01:30,633 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001: starting, having connections 3
2015-02-21 19:01:30,635 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001 got value #40
2015-02-21 19:01:30,636 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001: closed
2015-02-21 19:01:30,636 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001: stopped, remaining connections 2
2015-02-21 19:01:30,636 DEBUG [ContainerLauncher #4] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: startContainers took 11ms
2015-02-21 19:01:30,636 INFO [ContainerLauncher #4] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1424550134651_0002_m_000001_0 : 13562
2015-02-21 19:01:30,636 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerLaunchedEvent.EventType: TA_CONTAINER_LAUNCHED
2015-02-21 19:01:30,636 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000001_0 of type TA_CONTAINER_LAUNCHED
2015-02-21 19:01:30,636 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1424550134651_0002_m_000001_0] using containerId: [container_1424550134651_0002_01_000005 on NM: [hadoop3.rdpratti.com:8041]
2015-02-21 19:01:30,637 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000001_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2015-02-21 19:01:30,637 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:30,637 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:30,637 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: MAP_ATTEMPT_STARTED
2015-02-21 19:01:30,637 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_START
2015-02-21 19:01:30,637 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:30,637 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType: T_ATTEMPT_LAUNCHED
2015-02-21 19:01:30,637 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_m_000001 of type T_ATTEMPT_LAUNCHED
2015-02-21 19:01:30,637 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1424550134651_0002_m_000001 Task Transitioned from SCHEDULED to RUNNING
2015-02-21 19:01:30,637 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=2, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=17920
2015-02-21 19:01:30,637 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler MAP_ATTEMPT_STARTED
2015-02-21 19:01:30,918 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #99
2015-02-21 19:01:30,919 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#99 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:30,919 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:30,919 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 1 procesingTime= 0
2015-02-21 19:01:30,919 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#99 Retry#0
2015-02-21 19:01:30,919 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#99 Retry#0 Wrote 32 bytes.
2015-02-21 19:01:30,920 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #100
2015-02-21 19:01:30,920 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#100 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:30,920 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:30,921 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 1
2015-02-21 19:01:30,921 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#100 Retry#0
2015-02-21 19:01:30,921 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#100 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:30,922 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #101
2015-02-21 19:01:30,922 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#101 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:30,922 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:30,922 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:30,922 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#101 Retry#0
2015-02-21 19:01:30,923 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#101 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:31,614 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #41
2015-02-21 19:01:31,616 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #41
2015-02-21 19:01:31,616 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms
2015-02-21 19:01:31,616 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1424550134651_0002: ask=3 release= 0 newContainers=0 finishedContainers=0 resourcelimit= knownNMs=4
2015-02-21 19:01:31,616 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=
2015-02-21 19:01:31,616 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 4
2015-02-21 19:01:31,924 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #102
2015-02-21 19:01:31,924 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#102 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:31,924 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:31,925 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 0 procesingTime= 0
2015-02-21 19:01:31,925 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#102 Retry#0
2015-02-21 19:01:31,925 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#102 Retry#0 Wrote 32 bytes.
2015-02-21 19:01:31,925 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #103
2015-02-21 19:01:31,926 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#103 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:31,926 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:31,926 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 1 procesingTime= 0
2015-02-21 19:01:31,926 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#103 Retry#0
2015-02-21 19:01:31,926 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#103 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:31,927 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #104
2015-02-21 19:01:31,927 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#104 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:31,927 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:31,928 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 1
2015-02-21 19:01:31,928 DEBUG [IPC Server handler 0 on 59910]
org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#104 Retry#0=0A= 2015-02-21 19:01:31,928 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#104 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:32,357 DEBUG [IPC Server listener on 35954] = org.apache.hadoop.ipc.Server: Server connection from = 192.168.2.252:43600; # active connections: 3; # queued calls: 0=0A= 2015-02-21 19:01:32,398 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #4=0A= 2015-02-21 19:01:32,399 DEBUG [IPC Server handler 1 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 1 on 35954: = statusUpdate(attempt_1424550134651_0002_m_000004_0, = org.apache.hadoop.mapred.MapTaskStatus@3fcb6d4c), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.250:34282 Call#4 Retry#0 for RpcKind RPC_WRITABLE=0A= 2015-02-21 19:01:32,399 DEBUG [IPC Server handler 1 on 35954] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:job_1424550134651_0002 (auth:TOKEN) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:32,400 INFO [IPC Server handler 1 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of = TaskAttempt attempt_1424550134651_0002_m_000004_0 is : 0.0=0A= 2015-02-21 19:01:32,400 DEBUG [IPC Server handler 1 on 35954] = org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime=3D 0 = procesingTime=3D 1=0A= 2015-02-21 19:01:32,400 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent= .EventType: TA_UPDATE=0A= 2015-02-21 19:01:32,400 DEBUG [IPC Server handler 1 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 1 on 35954: responding = to statusUpdate(attempt_1424550134651_0002_m_000004_0, = org.apache.hadoop.mapred.MapTaskStatus@3fcb6d4c), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.250:34282 Call#4 Retry#0=0A= 2015-02-21 19:01:32,400 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_m_000004_0 of type TA_UPDATE=0A= 2015-02-21 19:01:32,400 DEBUG [IPC Server handler 1 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 1 on 35954: responding = to statusUpdate(attempt_1424550134651_0002_m_000004_0, = org.apache.hadoop.mapred.MapTaskStatus@3fcb6d4c), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.250:34282 Call#4 Retry#0 Wrote 41 bytes.=0A= 2015-02-21 19:01:32,400 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: = ATTEMPT_STATUS_UPDATE=0A= 2015-02-21 19:01:32,447 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #-33=0A= 2015-02-21 19:01:32,447 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.security.SaslRpcServer: Created SASL server with = mechanism =3D DIGEST-MD5=0A= 2015-02-21 19:01:32,447 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Sending sasl message state: NEGOTIATE=0A= auths {=0A= method: 
"TOKEN"=0A= mechanism: "DIGEST-MD5"=0A= protocol: ""=0A= serverId: "default"=0A= challenge: = "realm=3D\"default\",nonce=3D\"K490XAX8Ugggg9eCyi+dEAc2SKTps8ktpdeAmB7l\"= ,qop=3D\"auth\",charset=3Dutf-8,algorithm=3Dmd5-sess"=0A= }=0A= auths {=0A= method: "SIMPLE"=0A= mechanism: ""=0A= }=0A= =0A= 2015-02-21 19:01:32,448 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = responding to null from 192.168.2.252:43600 Call#-33 Retry#-1=0A= 2015-02-21 19:01:32,448 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = responding to null from 192.168.2.252:43600 Call#-33 Retry#-1 Wrote 178 = bytes.=0A= 2015-02-21 19:01:32,573 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #-33=0A= 2015-02-21 19:01:32,573 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Have read input token of size 270 for = processing by saslServer.evaluateResponse()=0A= 2015-02-21 19:01:32,573 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 = callback: setting password for client: job_1424550134651_0002 = (auth:SIMPLE)=0A= 2015-02-21 19:01:32,574 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 = callback: setting canonicalized client ID: job_1424550134651_0002=0A= 2015-02-21 19:01:32,574 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Will send SUCCESS token of size 40 from = saslServer.=0A= 2015-02-21 19:01:32,574 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: SASL server context established. = Negotiated QoP is auth=0A= 2015-02-21 19:01:32,574 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: SASL server successfully authenticated = client: job_1424550134651_0002 (auth:SIMPLE)=0A= 2015-02-21 19:01:32,574 INFO [Socket Reader #1 for port 35954] = SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for = job_1424550134651_0002 (auth:SIMPLE)=0A= 2015-02-21 19:01:32,575 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Sending sasl message state: SUCCESS=0A= token: "rspauth=3D515e830d682983e2b9a555ff327db97e"=0A= =0A= 2015-02-21 19:01:32,575 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = responding to null from 192.168.2.252:43600 Call#-33 Retry#-1=0A= 2015-02-21 19:01:32,575 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = responding to null from 192.168.2.252:43600 Call#-33 Retry#-1 Wrote 64 = bytes.=0A= 2015-02-21 19:01:32,590 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #-3=0A= 2015-02-21 19:01:32,591 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Successfully authorized userInfo {=0A= }=0A= protocol: "org.apache.hadoop.mapred.TaskUmbilicalProtocol"=0A= =0A= 2015-02-21 19:01:32,591 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #0=0A= 2015-02-21 19:01:32,591 DEBUG [IPC Server handler 2 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 2 on 35954: = getTask(org.apache.hadoop.mapred.JvmContext@128df71f), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43600 Call#0 Retry#0 for RpcKind RPC_WRITABLE=0A= 2015-02-21 19:01:32,591 DEBUG [IPC Server handler 2 on 35954] = 
org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:job_1424550134651_0002 (auth:TOKEN) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:32,591 INFO [IPC Server handler 2 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : = jvm_1424550134651_0002_m_000005 asked for a task=0A= 2015-02-21 19:01:32,591 INFO [IPC Server handler 2 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: = jvm_1424550134651_0002_m_000005 given task: = attempt_1424550134651_0002_m_000001_0=0A= 2015-02-21 19:01:32,591 DEBUG [IPC Server handler 2 on 35954] = org.apache.hadoop.ipc.Server: Served: getTask queueTime=3D 0 = procesingTime=3D 0=0A= 2015-02-21 19:01:32,592 DEBUG [IPC Server handler 2 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 2 on 35954: responding = to getTask(org.apache.hadoop.mapred.JvmContext@128df71f), rpc = version=3D2, client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43600 Call#0 Retry#0=0A= 2015-02-21 19:01:32,592 DEBUG [IPC Server handler 2 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 2 on 35954: responding = to getTask(org.apache.hadoop.mapred.JvmContext@128df71f), rpc = version=3D2, client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43600 Call#0 Retry#0 Wrote 378 bytes.=0A= 2015-02-21 19:01:32,617 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #42=0A= 2015-02-21 19:01:32,619 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #42=0A= 2015-02-21 19:01:32,619 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms=0A= 2015-02-21 19:01:32,619 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: = Recalculating schedule, headroom=3D=0A= 2015-02-21 19:01:32,619 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow = start threshold not met. 
completedMapsForReduceSlowstart 4=0A= 2015-02-21 19:01:32,929 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #105=0A= 2015-02-21 19:01:32,930 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#105 Retry#0 for RpcKind = RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:32,930 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:32,930 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents = queueTime=3D 1 procesingTime=3D 0=0A= 2015-02-21 19:01:32,930 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#105 Retry#0=0A= 2015-02-21 19:01:32,930 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#105 Retry#0 Wrote 32 bytes.=0A= 2015-02-21 19:01:32,931 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #106=0A= 2015-02-21 19:01:32,931 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#106 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:32,931 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:32,931 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 0=0A= 2015-02-21 19:01:32,932 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#106 Retry#0=0A= 2015-02-21 19:01:32,932 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#106 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:32,933 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #107=0A= 2015-02-21 19:01:32,933 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#107 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:32,933 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:32,933 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 0=0A= 2015-02-21 19:01:32,933 DEBUG [IPC Server handler 0 on 59910] = 
org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#107 Retry#0=0A= 2015-02-21 19:01:32,933 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#107 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:32,982 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #6=0A= 2015-02-21 19:01:32,982 DEBUG [IPC Server handler 6 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: = statusUpdate(attempt_1424550134651_0002_m_000004_0, = org.apache.hadoop.mapred.MapTaskStatus@755629fd), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.250:34282 Call#6 Retry#0 for RpcKind RPC_WRITABLE=0A= 2015-02-21 19:01:32,982 DEBUG [IPC Server handler 6 on 35954] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:job_1424550134651_0002 (auth:TOKEN) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:32,983 INFO [IPC Server handler 6 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of = TaskAttempt attempt_1424550134651_0002_m_000004_0 is : 1.0=0A= 2015-02-21 19:01:32,984 DEBUG [IPC Server handler 6 on 35954] = org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime=3D 1 = procesingTime=3D 1=0A= 2015-02-21 19:01:32,984 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent= .EventType: TA_UPDATE=0A= 2015-02-21 19:01:32,984 DEBUG [IPC Server handler 6 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: responding = to statusUpdate(attempt_1424550134651_0002_m_000004_0, = org.apache.hadoop.mapred.MapTaskStatus@755629fd), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.250:34282 Call#6 Retry#0=0A= 2015-02-21 19:01:32,984 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_m_000004_0 of type TA_UPDATE=0A= 2015-02-21 19:01:32,985 DEBUG [IPC Server handler 6 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: responding = to statusUpdate(attempt_1424550134651_0002_m_000004_0, = org.apache.hadoop.mapred.MapTaskStatus@755629fd), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.250:34282 Call#6 Retry#0 Wrote 41 bytes.=0A= 2015-02-21 19:01:32,985 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: = ATTEMPT_STATUS_UPDATE=0A= 2015-02-21 19:01:32,986 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #7=0A= 2015-02-21 19:01:32,986 DEBUG [IPC Server handler 8 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: = done(attempt_1424550134651_0002_m_000004_0), rpc version=3D2, client = version=3D19, methodsFingerPrint=3D937413979 from 192.168.2.250:34282 = Call#7 Retry#0 for RpcKind RPC_WRITABLE=0A= 2015-02-21 19:01:32,986 DEBUG [IPC Server handler 8 on 35954] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:job_1424550134651_0002 (auth:TOKEN) = 
from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:32,987 INFO [IPC Server handler 8 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement = from attempt_1424550134651_0002_m_000004_0=0A= 2015-02-21 19:01:32,987 DEBUG [IPC Server handler 8 on 35954] = org.apache.hadoop.ipc.Server: Served: done queueTime=3D 0 = procesingTime=3D 1=0A= 2015-02-21 19:01:32,987 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: = TA_DONE=0A= 2015-02-21 19:01:32,988 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_m_000004_0 of type TA_DONE=0A= 2015-02-21 19:01:32,988 DEBUG [IPC Server handler 8 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: responding = to done(attempt_1424550134651_0002_m_000004_0), rpc version=3D2, client = version=3D19, methodsFingerPrint=3D937413979 from 192.168.2.250:34282 = Call#7 Retry#0=0A= 2015-02-21 19:01:32,988 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: = attempt_1424550134651_0002_m_000004_0 TaskAttempt Transitioned from = RUNNING to SUCCESS_CONTAINER_CLEANUP=0A= 2015-02-21 19:01:32,988 DEBUG [IPC Server handler 8 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: responding = to done(attempt_1424550134651_0002_m_000004_0), rpc version=3D2, client = version=3D19, methodsFingerPrint=3D937413979 from 192.168.2.250:34282 = Call#7 Retry#0 Wrote 118 bytes.=0A= 2015-02-21 19:01:32,988 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherEvent.EventT= ype: CONTAINER_REMOTE_CLEANUP for container = container_1424550134651_0002_01_000003 taskAttempt = attempt_1424550134651_0002_m_000004_0=0A= 2015-02-21 19:01:32,988 INFO [ContainerLauncher #5] = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: = Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container = container_1424550134651_0002_01_000003 taskAttempt = attempt_1424550134651_0002_m_000004_0=0A= 2015-02-21 19:01:32,989 INFO [ContainerLauncher #5] = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: = KILLING attempt_1424550134651_0002_m_000004_0=0A= 2015-02-21 19:01:32,989 INFO [ContainerLauncher #5] = org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: = Opening proxy : hadoop1.rdpratti.com:8041=0A= 2015-02-21 19:01:32,989 DEBUG [ContainerLauncher #5] = org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, = Service: 192.168.2.250:8041, Ident: = (org.apache.hadoop.yarn.security.NMTokenIdentifier@2fa77e9a)=0A= 2015-02-21 19:01:32,989 DEBUG [ContainerLauncher #5] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) = from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(Serve= rProxy.java:88)=0A= 2015-02-21 19:01:32,989 DEBUG [ContainerLauncher #5] = org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a = HadoopYarnProtoRpc proxy for protocol interface = org.apache.hadoop.yarn.api.ContainerManagementProtocol=0A= 2015-02-21 19:01:32,990 DEBUG [ContainerLauncher #5] = org.apache.hadoop.ipc.Client: getting client out of cache: = org.apache.hadoop.ipc.Client@27c8bfa4=0A= 
2015-02-21 19:01:32,990 DEBUG [ContainerLauncher #5] = org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.=0A= 2015-02-21 19:01:32,990 DEBUG [ContainerLauncher #5] = org.apache.hadoop.ipc.Client: Connecting to = hadoop1.rdpratti.com/192.168.2.250:8041=0A= 2015-02-21 19:01:32,990 DEBUG [ContainerLauncher #5] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) = from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:7= 12)=0A= 2015-02-21 19:01:32,991 DEBUG [ContainerLauncher #5] = org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: = NEGOTIATE=0A= =0A= 2015-02-21 19:01:32,992 DEBUG [ContainerLauncher #5] = org.apache.hadoop.security.SaslRpcClient: Received SASL message state: = NEGOTIATE=0A= auths {=0A= method: "TOKEN"=0A= mechanism: "DIGEST-MD5"=0A= protocol: ""=0A= serverId: "default"=0A= challenge: = "realm=3D\"default\",nonce=3D\"P0Cg6LqeugdfHInyPhBnADvZ//GUL7H/73eWAK5q\"= ,qop=3D\"auth\",charset=3Dutf-8,algorithm=3Dmd5-sess"=0A= }=0A= =0A= 2015-02-21 19:01:32,992 DEBUG [ContainerLauncher #5] = org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface = org.apache.hadoop.yarn.api.ContainerManagementProtocolPB = info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@2b6bd= de6=0A= 2015-02-21 19:01:32,992 INFO [ContainerLauncher #5] = org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: = 192.168.2.250:8041. Current token is Kind: NMToken, Service: = 192.168.2.250:8041, Ident: = (org.apache.hadoop.yarn.security.NMTokenIdentifier@36b53d4f)=0A= 2015-02-21 19:01:32,992 DEBUG [ContainerLauncher #5] = org.apache.hadoop.security.SaslRpcClient: Creating SASL = DIGEST-MD5(TOKEN) client to authenticate to service at default=0A= 2015-02-21 19:01:32,993 DEBUG [ContainerLauncher #5] = org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for = protocol ContainerManagementProtocolPB=0A= 2015-02-21 19:01:32,993 DEBUG [ContainerLauncher #5] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = username: = AAABS63OA3sAAAACAAAAAQAZaGFkb29wMS5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB0= 0Yo=0A= 2015-02-21 19:01:32,993 DEBUG [ContainerLauncher #5] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = userPassword=0A= 2015-02-21 19:01:32,993 DEBUG [ContainerLauncher #5] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = realm: default=0A= 2015-02-21 19:01:32,994 DEBUG [ContainerLauncher #5] = org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: = INITIATE=0A= token: = "charset=3Dutf-8,username=3D\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMS5yZHByYXR= 0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=3D\"default\",nonce=3D\"P0Cg6Lq= eugdfHInyPhBnADvZ//GUL7H/73eWAK5q\",nc=3D00000001,cnonce=3D\"uS+n+ku2PiMe= h/ciPugLHiVVNww0ktwDOb09CmHM\",digest-uri=3D\"/default\",maxbuf=3D65536,r= esponse=3Dfab34d290f0543076516e25a6c5245f2,qop=3Dauth"=0A= auths {=0A= method: "TOKEN"=0A= mechanism: "DIGEST-MD5"=0A= protocol: ""=0A= serverId: "default"=0A= }=0A= =0A= 2015-02-21 19:01:32,997 DEBUG [ContainerLauncher #5] = org.apache.hadoop.security.SaslRpcClient: Received SASL message state: = SUCCESS=0A= token: "rspauth=3De94bd3d82276fa055ea228a307f3656b"=0A= =0A= 2015-02-21 19:01:32,997 DEBUG [ContainerLauncher #5] = org.apache.hadoop.ipc.Client: Negotiated QOP is :auth=0A= 2015-02-21 19:01:32,997 DEBUG [IPC Client (1541092382) connection to = hadoop1.rdpratti.com/192.168.2.250:8041 from = 
appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = hadoop1.rdpratti.com/192.168.2.250:8041 from = appattempt_1424550134651_0002_000001: starting, having connections 3=0A= 2015-02-21 19:01:32,998 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop1.rdpratti.com/192.168.2.250:8041 from = appattempt_1424550134651_0002_000001 sending #43=0A= 2015-02-21 19:01:33,007 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = disconnecting client 192.168.2.250:34282. Number of active connections: 2=0A= 2015-02-21 19:01:33,045 DEBUG [IPC Client (1541092382) connection to = hadoop1.rdpratti.com/192.168.2.250:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = hadoop1.rdpratti.com/192.168.2.250:8041 from = appattempt_1424550134651_0002_000001 got value #43=0A= 2015-02-21 19:01:33,045 DEBUG [ContainerLauncher #5] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: stopContainers took 55ms=0A= 2015-02-21 19:01:33,045 DEBUG [IPC Client (1541092382) connection to = hadoop1.rdpratti.com/192.168.2.250:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = hadoop1.rdpratti.com/192.168.2.250:8041 from = appattempt_1424550134651_0002_000001: closed=0A= 2015-02-21 19:01:33,045 DEBUG [IPC Client (1541092382) connection to = hadoop1.rdpratti.com/192.168.2.250:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = hadoop1.rdpratti.com/192.168.2.250:8041 from = appattempt_1424550134651_0002_000001: stopped, remaining connections 2=0A= 2015-02-21 19:01:33,046 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: = TA_CONTAINER_CLEANED=0A= 2015-02-21 19:01:33,046 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_m_000004_0 of type TA_CONTAINER_CLEANED=0A= 2015-02-21 19:01:33,046 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: = attempt_1424550134651_0002_m_000004_0 TaskAttempt Transitioned from = SUCCESS_CONTAINER_CLEANUP to SUCCEEDED=0A= 2015-02-21 19:01:33,046 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventT= ype: JOB_COUNTER_UPDATE=0A= 2015-02-21 19:01:33,046 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing = job_1424550134651_0002 of type JOB_COUNTER_UPDATE=0A= 2015-02-21 19:01:33,046 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: = MAP_ATTEMPT_FINISHED=0A= 2015-02-21 19:01:33,046 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType:= T_ATTEMPT_SUCCEEDED=0A= 2015-02-21 19:01:33,046 DEBUG [eventHandlingThread] = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing = event=0A= 2015-02-21 19:01:33,046 DEBUG 
[AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing = task_1424550134651_0002_m_000004 of type T_ATTEMPT_SUCCEEDED=0A= 2015-02-21 19:01:33,046 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded = with attempt attempt_1424550134651_0002_m_000004_0=0A= 2015-02-21 19:01:33,047 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: = task_1424550134651_0002_m_000004 Task Transitioned from RUNNING to = SUCCEEDED=0A= 2015-02-21 19:01:33,047 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: = ATTEMPT_STATUS_UPDATE=0A= 2015-02-21 19:01:33,047 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskAttemptCompletedEvent= .EventType: JOB_TASK_ATTEMPT_COMPLETED=0A= 2015-02-21 19:01:33,047 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing = job_1424550134651_0002 of type JOB_TASK_ATTEMPT_COMPLETED=0A= 2015-02-21 19:01:33,047 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskEvent.EventType: = JOB_TASK_COMPLETED=0A= 2015-02-21 19:01:33,047 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing = job_1424550134651_0002 of type JOB_TASK_COMPLETED=0A= 2015-02-21 19:01:33,047 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed = Tasks: 2=0A= 2015-02-21 19:01:33,047 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: = TASK_FINISHED=0A= 2015-02-21 19:01:33,049 DEBUG [eventHandlingThread] = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing = Job MetaInfo for job_1424550134651_0002 history file = hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651= _0002/job_1424550134651_0002_1.jhist=0A= 2015-02-21 19:01:33,049 DEBUG [eventHandlingThread] = org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock = 21731 lastFlushOffset 18142=0A= 2015-02-21 19:01:33,049 DEBUG [eventHandlingThread] = org.apache.hadoop.hdfs.DFSClient: Queued packet 2=0A= 2015-02-21 19:01:33,049 DEBUG [DataStreamer for file = /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.j= hist block = BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] = org.apache.hadoop.hdfs.DFSClient: DataStreamer block = BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending = packet packet seqno:2 offsetInBlock:17920 lastPacketInBlock:false = lastByteOffsetInBlock: 21731=0A= 2015-02-21 19:01:33,049 DEBUG [eventHandlingThread] = org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 2=0A= 2015-02-21 19:01:33,052 DEBUG [ResponseProcessor for block = BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] = org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 2 status: SUCCESS = status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1663810=0A= 2015-02-21 19:01:33,052 DEBUG [eventHandlingThread] = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In = HistoryEventHandler 
MAP_ATTEMPT_FINISHED=0A= 2015-02-21 19:01:33,052 DEBUG [eventHandlingThread] = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing = event=0A= 2015-02-21 19:01:33,054 DEBUG [eventHandlingThread] = org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new = packet seqno=3D3, = src=3D/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_00= 02_1.jhist, packetSize=3D65532, chunksPerPacket=3D127, = bytesCurBlock=3D21504=0A= 2015-02-21 19:01:33,054 DEBUG [eventHandlingThread] = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing = Job MetaInfo for job_1424550134651_0002 history file = hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651= _0002/job_1424550134651_0002_1.jhist=0A= 2015-02-21 19:01:33,054 DEBUG [eventHandlingThread] = org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock = 24458 lastFlushOffset 21731=0A= 2015-02-21 19:01:33,055 DEBUG [eventHandlingThread] = org.apache.hadoop.hdfs.DFSClient: Queued packet 3=0A= 2015-02-21 19:01:33,055 DEBUG [DataStreamer for file = /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.j= hist block = BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] = org.apache.hadoop.hdfs.DFSClient: DataStreamer block = BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending = packet packet seqno:3 offsetInBlock:21504 lastPacketInBlock:false = lastByteOffsetInBlock: 24458=0A= 2015-02-21 19:01:33,055 DEBUG [eventHandlingThread] = org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 3=0A= 2015-02-21 19:01:33,057 DEBUG [ResponseProcessor for block = BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] = org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 3 status: SUCCESS = status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1566757=0A= 2015-02-21 19:01:33,057 DEBUG [eventHandlingThread] = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In = HistoryEventHandler TASK_FINISHED=0A= 2015-02-21 19:01:33,619 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before = Scheduling: PendingReds:4 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:3 = AssignedReds:0 CompletedMaps:2 CompletedReds:0 ContAlloc:4 ContRel:0 = HostLocal:2 RackLocal:2=0A= 2015-02-21 19:01:33,620 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #44=0A= 2015-02-21 19:01:33,621 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #44=0A= 2015-02-21 19:01:33,621 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 1ms=0A= 2015-02-21 19:01:33,622 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: = Recalculating schedule, headroom=3D=0A= 2015-02-21 19:01:33,622 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow = start threshold not met. 
completedMapsForReduceSlowstart 4=0A= 2015-02-21 19:01:33,935 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #108=0A= 2015-02-21 19:01:33,935 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#108 Retry#0 for RpcKind = RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:33,935 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:33,936 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents = queueTime=3D 0 procesingTime=3D 0=0A= 2015-02-21 19:01:33,936 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#108 Retry#0=0A= 2015-02-21 19:01:33,936 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#108 Retry#0 Wrote 101 bytes.=0A= 2015-02-21 19:01:33,937 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #109=0A= 2015-02-21 19:01:33,937 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#109 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:33,937 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:33,938 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 1=0A= 2015-02-21 19:01:33,938 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#109 Retry#0=0A= 2015-02-21 19:01:33,938 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#109 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:33,939 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #110=0A= 2015-02-21 19:01:33,939 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#110 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:33,939 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:33,939 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 0=0A= 2015-02-21 19:01:33,940 DEBUG [IPC Server handler 0 on 59910] = 
org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#110 Retry#0=0A= 2015-02-21 19:01:33,940 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#110 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:34,481 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: REQUEST = /ws/v1/mapreduce/jobs/job_1424550134651_0002 on = org.mortbay.jetty.HttpConnection@5295f395=0A= 2015-02-21 19:01:34,481 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: = sessionManager=3Dorg.mortbay.jetty.servlet.HashSessionManager@4befbfaf=0A= 2015-02-21 19:01:34,481 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: = session=3Dnull=0A= 2015-02-21 19:01:34,481 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: = servlet=3Ddefault=0A= 2015-02-21 19:01:34,481 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: = chain=3DNoCacheFilter->safety->AM_PROXY_FILTER->guice->default=0A= 2015-02-21 19:01:34,481 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: servlet = holder=3Ddefault=0A= 2015-02-21 19:01:34,481 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: call = filter NoCacheFilter=0A= 2015-02-21 19:01:34,482 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: call = filter safety=0A= 2015-02-21 19:01:34,482 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: call = filter AM_PROXY_FILTER=0A= 2015-02-21 19:01:34,482 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] = org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter: Remote = address for request is: 192.168.2.253=0A= 2015-02-21 19:01:34,482 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] = org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter: proxy = address is: 192.168.2.253=0A= 2015-02-21 19:01:34,482 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: call = filter guice=0A= 2015-02-21 19:01:34,485 DEBUG [970736822@qtp-700266387-0 - = /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: RESPONSE = /ws/v1/mapreduce/jobs/job_1424550134651_0002 200=0A= 2015-02-21 19:01:34,485 DEBUG [970736822@qtp-700266387-0] = org.mortbay.log: EOF=0A= 2015-02-21 19:01:34,594 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #4=0A= 2015-02-21 19:01:34,595 DEBUG [IPC Server handler 5 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 5 on 35954: = statusUpdate(attempt_1424550134651_0002_m_000000_0, = org.apache.hadoop.mapred.MapTaskStatus@4a8dd32c), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.251:58635 Call#4 Retry#0 for RpcKind RPC_WRITABLE=0A= 2015-02-21 19:01:34,595 DEBUG [IPC Server handler 5 on 35954] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:job_1424550134651_0002 (auth:TOKEN) = 
from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:34,595 INFO [IPC Server handler 5 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of = TaskAttempt attempt_1424550134651_0002_m_000000_0 is : 0.0=0A= 2015-02-21 19:01:34,596 DEBUG [IPC Server handler 5 on 35954] = org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime=3D 0 = procesingTime=3D 1=0A= 2015-02-21 19:01:34,596 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent= .EventType: TA_UPDATE=0A= 2015-02-21 19:01:34,596 DEBUG [IPC Server handler 5 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 5 on 35954: responding = to statusUpdate(attempt_1424550134651_0002_m_000000_0, = org.apache.hadoop.mapred.MapTaskStatus@4a8dd32c), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.251:58635 Call#4 Retry#0=0A= 2015-02-21 19:01:34,596 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_m_000000_0 of type TA_UPDATE=0A= 2015-02-21 19:01:34,596 DEBUG [IPC Server handler 5 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 5 on 35954: responding = to statusUpdate(attempt_1424550134651_0002_m_000000_0, = org.apache.hadoop.mapred.MapTaskStatus@4a8dd32c), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.251:58635 Call#4 Retry#0 Wrote 41 bytes.=0A= 2015-02-21 19:01:34,596 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: = ATTEMPT_STATUS_UPDATE=0A= 2015-02-21 19:01:34,622 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #45=0A= 2015-02-21 19:01:34,625 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #45=0A= 2015-02-21 19:01:34,625 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 3ms=0A= 2015-02-21 19:01:34,625 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received new = Container :Container: [ContainerId: = container_1424550134651_0002_01_000008, NodeId: = hadoop1.rdpratti.com:8041, NodeHttpAddress: hadoop1.rdpratti.com:8042, = Resource: , Priority: 20, Token: Token { kind: = ContainerToken, service: 192.168.2.250:8041 }, ]=0A= 2015-02-21 19:01:34,625 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received = completed container container_1424550134651_0002_01_000003=0A= 2015-02-21 19:01:34,625 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: = TA_CONTAINER_COMPLETED=0A= 2015-02-21 19:01:34,625 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got = allocated containers 1=0A= 2015-02-21 19:01:34,625 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning = container 
container_1424550134651_0002_01_000008 with priority 20 to NM hadoop1.rdpratti.com:8041
2015-02-21 19:01:34,625 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000004_0 of type TA_CONTAINER_COMPLETED
2015-02-21 19:01:34,626 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptDiagnosticsUpdateEvent.EventType: TA_DIAGNOSTICS_UPDATE
2015-02-21 19:01:34,626 INFO [RMCommunicator Allocator] org.apache.hadoop.yarn.util.RackResolver: Resolved hadoop1.rdpratti.com to /default
2015-02-21 19:01:34,626 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000004_0 of type TA_DIAGNOSTICS_UPDATE
2015-02-21 19:01:34,626 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE decResourceRequest: applicationId=2 priority=20 resourceName=hadoop0.rdpratti.com numContainers=1 #asks=0
2015-02-21 19:01:34,626 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1424550134651_0002_m_000004_0: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2015-02-21 19:01:34,626 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER decResourceRequest: applicationId=2 priority=20 resourceName=hadoop0.rdpratti.com numContainers=0 #asks=1
2015-02-21 19:01:34,626 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE decResourceRequest: applicationId=2 priority=20 resourceName=/default numContainers=1 #asks=1
2015-02-21 19:01:34,626 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER decResourceRequest: applicationId=2 priority=20 resourceName=/default numContainers=0 #asks=2
2015-02-21 19:01:34,626 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE decResourceRequest: applicationId=2 priority=20 resourceName=* numContainers=1 #asks=2
2015-02-21 19:01:34,626 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER decResourceRequest: applicationId=2 priority=20 resourceName=* numContainers=0 #asks=3
2015-02-21 19:01:34,626 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerAssignedEvent.EventType: TA_ASSIGNED
2015-02-21 19:01:34,626 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1424550134651_0002_01_000008 to attempt_1424550134651_0002_m_000002_0
2015-02-21 19:01:34,626 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000002_0 of type TA_ASSIGNED
2015-02-21 19:01:34,626 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapred.SortedRanges: currentIndex 0 0:0
2015-02-21 19:01:34,626 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container (Container: [ContainerId: container_1424550134651_0002_01_000008, NodeId: hadoop1.rdpratti.com:8041, NodeHttpAddress: hadoop1.rdpratti.com:8042, Resource: , Priority: 20, Token: Token { kind: ContainerToken, service: 192.168.2.250:8041 }, ]) to task attempt_1424550134651_0002_m_000002_0 on node hadoop1.rdpratti.com:8041
2015-02-21 19:01:34,626 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned based on rack match /default
2015-02-21 19:01:34,626 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=
2015-02-21 19:01:34,626 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 4
2015-02-21 19:01:34,626 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:4 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:3 AssignedReds:0 CompletedMaps:2 CompletedReds:0 ContAlloc:5 ContRel:0 HostLocal:2 RackLocal:3
2015-02-21 19:01:34,626 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.util.RackResolver: Resolved hadoop1.rdpratti.com to /default
2015-02-21 19:01:34,627 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000002_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2015-02-21 19:01:34,627 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:34,627 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:34,627 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.launcher.ContainerRemoteLaunchEvent.EventType: CONTAINER_REMOTE_LAUNCH for container container_1424550134651_0002_01_000008 taskAttempt attempt_1424550134651_0002_m_000002_0
2015-02-21 19:01:34,627 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: TASK_CONTAINER_NEED_UPDATE
2015-02-21 19:01:34,628 INFO [ContainerLauncher #6] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1424550134651_0002_01_000008 taskAttempt attempt_1424550134651_0002_m_000002_0
2015-02-21 19:01:34,628 INFO [ContainerLauncher #6] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1424550134651_0002_m_000002_0
2015-02-21 19:01:34,628 INFO [ContainerLauncher #6] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : hadoop1.rdpratti.com:8041
2015-02-21 19:01:34,628 DEBUG [ContainerLauncher #6] org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, Service: 192.168.2.250:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@79fb73c5)
2015-02-21 19:01:34,628 DEBUG [ContainerLauncher #6] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:88)
2015-02-21 19:01:34,628 DEBUG [ContainerLauncher #6] org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
2015-02-21 19:01:34,629 DEBUG [ContainerLauncher #6] org.apache.hadoop.ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@27c8bfa4
2015-02-21 19:01:34,629 DEBUG [ContainerLauncher #6] org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.
2015-02-21 19:01:34,629 DEBUG [ContainerLauncher #6] org.apache.hadoop.ipc.Client: Connecting to hadoop1.rdpratti.com/192.168.2.250:8041
2015-02-21 19:01:34,630 DEBUG [ContainerLauncher #6] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
2015-02-21 19:01:34,630 DEBUG [ContainerLauncher #6] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 19:01:34,632 DEBUG [ContainerLauncher #6] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"oThbywUHH9DaSDOoSkVAJdmDHfGrDtxo38yJxRVW\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}

2015-02-21 19:01:34,632 DEBUG [ContainerLauncher #6] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@4a5bbd83
2015-02-21 19:01:34,632 INFO [ContainerLauncher #6] org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: 192.168.2.250:8041. Current token is Kind: NMToken, Service: 192.168.2.250:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@7bec29bf)
2015-02-21 19:01:34,632 DEBUG [ContainerLauncher #6] org.apache.hadoop.security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2015-02-21 19:01:34,633 DEBUG [ContainerLauncher #6] org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2015-02-21 19:01:34,633 DEBUG [ContainerLauncher #6] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting username: AAABS63OA3sAAAACAAAAAQAZaGFkb29wMS5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo
2015-02-21 19:01:34,633 DEBUG [ContainerLauncher #6] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting userPassword
2015-02-21 19:01:34,633 DEBUG [ContainerLauncher #6] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting realm: default
2015-02-21 19:01:34,633 DEBUG [ContainerLauncher #6] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
token: "charset=utf-8,username=\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMS5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=\"default\",nonce=\"oThbywUHH9DaSDOoSkVAJdmDHfGrDtxo38yJxRVW\",nc=00000001,cnonce=\"RT9m1jdO1EgFxBJJNZHcuZtztsXCj2yc/v/b0YHJ\",digest-uri=\"/default\",maxbuf=65536,response=c0eb55fb86f2039a8e8e6527b33df9dd,qop=auth"
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
}

2015-02-21 19:01:34,638 DEBUG [ContainerLauncher #6] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: SUCCESS
token: "rspauth=781ba90e288f3dbd1f52840c20a63827"

2015-02-21 19:01:34,638 DEBUG [ContainerLauncher #6] org.apache.hadoop.ipc.Client: Negotiated QOP is :auth
2015-02-21 19:01:34,642 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001 sending #46
2015-02-21 19:01:34,642 DEBUG [IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001: starting, having connections 3
2015-02-21 19:01:34,648 DEBUG [IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001 got value #46
2015-02-21 19:01:34,648 DEBUG [ContainerLauncher #6] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: startContainers took 19ms
2015-02-21 19:01:34,648 DEBUG [IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001: closed
2015-02-21 19:01:34,648 DEBUG [IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001: stopped, remaining connections 2
2015-02-21 19:01:34,648 INFO [ContainerLauncher #6] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1424550134651_0002_m_000002_0 : 13562
2015-02-21 19:01:34,648 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerLaunchedEvent.EventType: TA_CONTAINER_LAUNCHED
2015-02-21 19:01:34,648 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000002_0 of type TA_CONTAINER_LAUNCHED
2015-02-21 19:01:34,649 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1424550134651_0002_m_000002_0] using containerId: [container_1424550134651_0002_01_000008 on NM: [hadoop1.rdpratti.com:8041]
2015-02-21 19:01:34,649 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000002_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2015-02-21 19:01:34,649 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:34,649 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:34,649 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: MAP_ATTEMPT_STARTED
2015-02-21 19:01:34,649 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_START
2015-02-21 19:01:34,649 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:34,649 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType: T_ATTEMPT_LAUNCHED
2015-02-21 19:01:34,649 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_m_000002 of type T_ATTEMPT_LAUNCHED
2015-02-21 19:01:34,649 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1424550134651_0002_m_000002 Task Transitioned from SCHEDULED to RUNNING
2015-02-21 19:01:34,649 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=4, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=24064
2015-02-21 19:01:34,649 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler MAP_ATTEMPT_STARTED
2015-02-21 19:01:34,941 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #111
2015-02-21 19:01:34,941 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#111 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:34,941 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:34,942 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 1 procesingTime= 0
2015-02-21 19:01:34,942 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#111 Retry#0
2015-02-21 19:01:34,942 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#111 Retry#0 Wrote 32 bytes.
2015-02-21 19:01:34,943 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #112
2015-02-21 19:01:34,943 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#112 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:34,943 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:34,943 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:34,944 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#112 Retry#0
2015-02-21 19:01:34,944 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#112 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:34,945 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #113
2015-02-21 19:01:34,945 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#113 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:34,945 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:34,945 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:34,945 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#113 Retry#0
2015-02-21 19:01:34,945 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#113 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:34,998 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #5
2015-02-21 19:01:34,999 DEBUG [IPC Server handler 23 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 23 on 35954: ping(attempt_1424550134651_0002_m_000000_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58635 Call#5 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:34,999 DEBUG [IPC Server handler 23 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:34,999 DEBUG [IPC Server handler 23 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1424550134651_0002_m_000000_0
2015-02-21 19:01:34,999 DEBUG [IPC Server handler 23 on 35954] org.apache.hadoop.ipc.Server: Served: ping queueTime= 0 procesingTime= 0
2015-02-21 19:01:34,999 DEBUG [IPC Server handler 23 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 23 on 35954: responding to ping(attempt_1424550134651_0002_m_000000_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58635 Call#5 Retry#0
2015-02-21 19:01:34,999 DEBUG [IPC Server handler 23 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 23 on 35954: responding to ping(attempt_1424550134651_0002_m_000000_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58635 Call#5 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:35,472 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #4
2015-02-21 19:01:35,473 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 2 on 35954: statusUpdate(attempt_1424550134651_0002_m_000001_0, org.apache.hadoop.mapred.MapTaskStatus@17c48cbe), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43600 Call#4 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:35,473 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:35,473 INFO [IPC Server handler 2 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1424550134651_0002_m_000001_0 is : 0.0
2015-02-21 19:01:35,474 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime= 0 procesingTime= 1
2015-02-21 19:01:35,474 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.EventType: TA_UPDATE
2015-02-21 19:01:35,474 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 2 on 35954: responding to statusUpdate(attempt_1424550134651_0002_m_000001_0, org.apache.hadoop.mapred.MapTaskStatus@17c48cbe), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43600 Call#4 Retry#0
2015-02-21 19:01:35,474 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000001_0 of type TA_UPDATE
2015-02-21 19:01:35,474 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 2 on 35954: responding to statusUpdate(attempt_1424550134651_0002_m_000001_0, org.apache.hadoop.mapred.MapTaskStatus@17c48cbe), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43600 Call#4 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:35,474 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:35,628 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #47
2015-02-21 19:01:35,630 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #47
2015-02-21 19:01:35,630 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms
2015-02-21 19:01:35,630 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1424550134651_0002: ask=3 release= 0 newContainers=0 finishedContainers=0 resourcelimit= knownNMs=4
2015-02-21 19:01:35,947 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #114
2015-02-21 19:01:35,948 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#114 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:35,948 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:35,948 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 0 procesingTime= 0
2015-02-21 19:01:35,949 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#114 Retry#0
2015-02-21 19:01:35,949 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#114 Retry#0 Wrote 32 bytes.
2015-02-21 19:01:35,950 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #115
2015-02-21 19:01:35,950 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#115 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:35,950 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:35,950 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:35,951 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#115 Retry#0
2015-02-21 19:01:35,951 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#115 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:35,952 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #116
2015-02-21 19:01:35,952 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#116 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:35,952 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:35,952 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:35,952 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#116 Retry#0
2015-02-21 19:01:35,953 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#116 Retry#0 Wrote 280 bytes.
2015-02-21 19:01:36,021 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #7
2015-02-21 19:01:36,021 DEBUG [IPC Server handler 11 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 11 on 35954: statusUpdate(attempt_1424550134651_0002_m_000000_0, org.apache.hadoop.mapred.MapTaskStatus@5b9b10cd), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58635 Call#7 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:36,021 DEBUG [IPC Server handler 11 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:36,022 INFO [IPC Server handler 11 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1424550134651_0002_m_000000_0 is : 1.0
2015-02-21 19:01:36,024 DEBUG [IPC Server handler 11 on 35954] org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime= 0 procesingTime= 3
2015-02-21 19:01:36,024 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.EventType: TA_UPDATE
2015-02-21 19:01:36,024 DEBUG [IPC Server handler 11 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 11 on 35954: responding to statusUpdate(attempt_1424550134651_0002_m_000000_0, org.apache.hadoop.mapred.MapTaskStatus@5b9b10cd), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58635 Call#7 Retry#0
2015-02-21 19:01:36,024 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000000_0 of type TA_UPDATE
2015-02-21 19:01:36,024 DEBUG [IPC Server handler 11 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 11 on 35954: responding to statusUpdate(attempt_1424550134651_0002_m_000000_0, org.apache.hadoop.mapred.MapTaskStatus@5b9b10cd), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58635 Call#7 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:36,024 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:36,026 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #8
2015-02-21 19:01:36,026 DEBUG [IPC Server handler 26 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 26 on 35954: done(attempt_1424550134651_0002_m_000000_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58635 Call#8 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:36,026 DEBUG [IPC Server handler 26 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:36,026 INFO [IPC Server handler 26 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1424550134651_0002_m_000000_0
2015-02-21 19:01:36,026 DEBUG [IPC Server handler 26 on 35954] org.apache.hadoop.ipc.Server: Served: done queueTime= 0 procesingTime= 0
2015-02-21 19:01:36,026 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_DONE
2015-02-21 19:01:36,026 DEBUG [IPC Server handler 26 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 26 on 35954: responding to done(attempt_1424550134651_0002_m_000000_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58635 Call#8 Retry#0
2015-02-21 19:01:36,026 DEBUG [IPC Server handler 26 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 26 on 35954: responding to done(attempt_1424550134651_0002_m_000000_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58635 Call#8 Retry#0 Wrote 118 bytes.
2015-02-21 19:01:36,027 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000000_0 of type TA_DONE
2015-02-21 19:01:36,027 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000000_0 TaskAttempt Transitioned from RUNNING to SUCCESS_CONTAINER_CLEANUP
2015-02-21 19:01:36,027 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherEvent.EventType: CONTAINER_REMOTE_CLEANUP for container container_1424550134651_0002_01_000004 taskAttempt attempt_1424550134651_0002_m_000000_0
2015-02-21 19:01:36,027 INFO [ContainerLauncher #7] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_1424550134651_0002_01_000004 taskAttempt attempt_1424550134651_0002_m_000000_0
2015-02-21 19:01:36,027 INFO [ContainerLauncher #7] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1424550134651_0002_m_000000_0
2015-02-21 19:01:36,028 INFO [ContainerLauncher #7] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : hadoop2.rdpratti.com:8041
2015-02-21 19:01:36,028 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, Service: 192.168.2.251:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@1f6b6954)
2015-02-21 19:01:36,029 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:88)
2015-02-21 19:01:36,029 DEBUG [ContainerLauncher #7] org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
2015-02-21 19:01:36,029 DEBUG [ContainerLauncher #7] org.apache.hadoop.ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@27c8bfa4
2015-02-21 19:01:36,029 DEBUG [ContainerLauncher #7] org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.
2015-02-21 19:01:36,029 DEBUG [ContainerLauncher #7] org.apache.hadoop.ipc.Client: Connecting to hadoop2.rdpratti.com/192.168.2.251:8041
2015-02-21 19:01:36,030 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
2015-02-21 19:01:36,030 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 19:01:36,031 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"VUduxDUnyVLPrl0R3odmqrCjcDXFNda72FswFZBL\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}

2015-02-21 19:01:36,031 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@46a7ccc6
2015-02-21 19:01:36,032 INFO [ContainerLauncher #7] org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: 192.168.2.251:8041. Current token is Kind: NMToken, Service: 192.168.2.251:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@78bed8ba)
2015-02-21 19:01:36,032 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2015-02-21 19:01:36,032 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2015-02-21 19:01:36,033 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting username: AAABS63OA3sAAAACAAAAAQAZaGFkb29wMi5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo
2015-02-21 19:01:36,033 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting userPassword
2015-02-21 19:01:36,033 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting realm: default
2015-02-21 19:01:36,033 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
token: "charset=utf-8,username=\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMi5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=\"default\",nonce=\"VUduxDUnyVLPrl0R3odmqrCjcDXFNda72FswFZBL\",nc=00000001,cnonce=\"59hQ+ebzMPkuI7L6oHOA1fV2GIFWIxiN5D+wfwjS\",digest-uri=\"/default\",maxbuf=65536,response=08373b6702b995558f375b07a4ea748b,qop=auth"
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
}

2015-02-21 19:01:36,036 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: SUCCESS
token: "rspauth=918692765dbc1a8b2f4735d92f593ca2"

2015-02-21 19:01:36,036 DEBUG [ContainerLauncher #7] org.apache.hadoop.ipc.Client: Negotiated QOP is :auth
2015-02-21 19:01:36,036 DEBUG [IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001: starting, having connections 3
2015-02-21 19:01:36,037 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001 sending #48
2015-02-21 19:01:36,049 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: disconnecting client 192.168.2.251:58635. Number of active connections: 1
2015-02-21 19:01:36,069 DEBUG [IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001 got value #48
2015-02-21 19:01:36,069 DEBUG [ContainerLauncher #7] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: stopContainers took 40ms
2015-02-21 19:01:36,069 DEBUG [IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001: closed
2015-02-21 19:01:36,069 DEBUG [IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001: stopped, remaining connections 2
2015-02-21 19:01:36,069 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_CONTAINER_CLEANED
2015-02-21 19:01:36,070 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000000_0 of type TA_CONTAINER_CLEANED
2015-02-21 19:01:36,070 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000000_0 TaskAttempt Transitioned from SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
2015-02-21 19:01:36,070 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:36,070 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:36,070 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: MAP_ATTEMPT_FINISHED
2015-02-21 19:01:36,070 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType: T_ATTEMPT_SUCCEEDED
2015-02-21 19:01:36,070 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:36,070 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_m_000000 of type T_ATTEMPT_SUCCEEDED
2015-02-21 19:01:36,070 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1424550134651_0002_m_000000_0
2015-02-21 19:01:36,070 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1424550134651_0002_m_000000 Task Transitioned from RUNNING to SUCCEEDED
2015-02-21 19:01:36,070 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:36,070 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskAttemptCompletedEvent.EventType: JOB_TASK_ATTEMPT_COMPLETED
2015-02-21 19:01:36,070 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_TASK_ATTEMPT_COMPLETED
2015-02-21 19:01:36,071 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskEvent.EventType: JOB_TASK_COMPLETED
2015-02-21 19:01:36,071 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_TASK_COMPLETED
2015-02-21 19:01:36,071 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 3
2015-02-21 19:01:36,071 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: TASK_FINISHED
2015-02-21 19:01:36,073 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:36,073 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock 28072 lastFlushOffset 24458
2015-02-21 19:01:36,073 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 4
2015-02-21 19:01:36,073 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 4
2015-02-21 19:01:36,073 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:4 offsetInBlock:24064 lastPacketInBlock:false lastByteOffsetInBlock: 28072
2015-02-21 19:01:36,076 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 4 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 2341664
2015-02-21 19:01:36,076 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler MAP_ATTEMPT_FINISHED
2015-02-21 19:01:36,076 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:36,078 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=5, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=27648
2015-02-21 19:01:36,078 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:36,078 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock 30812 lastFlushOffset 28072
2015-02-21 19:01:36,078 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 5
2015-02-21 19:01:36,078 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 5
2015-02-21 19:01:36,078 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:5 offsetInBlock:27648 lastPacketInBlock:false lastByteOffsetInBlock: 30812
2015-02-21 19:01:36,080 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 5 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1455529
2015-02-21 19:01:36,080 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler TASK_FINISHED
2015-02-21 19:01:36,153 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #6
2015-02-21 19:01:36,154 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: statusUpdate(attempt_1424550134651_0002_m_000001_0, org.apache.hadoop.mapred.MapTaskStatus@5f961d6b), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43600 Call#6 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:36,154 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:36,154 INFO [IPC Server handler 4 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1424550134651_0002_m_000001_0 is : 1.0
2015-02-21 19:01:36,156 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime= 0 procesingTime= 2
2015-02-21 19:01:36,156 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.EventType: TA_UPDATE
2015-02-21 19:01:36,156 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000001_0 of type TA_UPDATE
2015-02-21 19:01:36,156 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: responding to statusUpdate(attempt_1424550134651_0002_m_000001_0, org.apache.hadoop.mapred.MapTaskStatus@5f961d6b), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43600 Call#6 Retry#0
2015-02-21 19:01:36,157 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:36,157 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: responding to statusUpdate(attempt_1424550134651_0002_m_000001_0, org.apache.hadoop.mapred.MapTaskStatus@5f961d6b), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43600 Call#6 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:36,157 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #7
2015-02-21 19:01:36,158 DEBUG [IPC Server handler 29 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 29 on 35954: done(attempt_1424550134651_0002_m_000001_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43600 Call#7 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:36,158 DEBUG [IPC Server handler 29 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:36,158 INFO [IPC Server handler 29 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1424550134651_0002_m_000001_0
2015-02-21 19:01:36,158 DEBUG [IPC Server handler 29 on 35954] org.apache.hadoop.ipc.Server: Served: done queueTime= 1 procesingTime= 0
2015-02-21 19:01:36,158 DEBUG [IPC Server handler 29 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 29 on 35954: responding to done(attempt_1424550134651_0002_m_000001_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43600 Call#7 Retry#0
2015-02-21 19:01:36,158 DEBUG [IPC Server handler 29 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 29 on 35954: responding to done(attempt_1424550134651_0002_m_000001_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43600 Call#7 Retry#0 Wrote 118 bytes.
2015-02-21 19:01:36,159 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: disconnecting client 192.168.2.252:43600. Number of active connections: 0
2015-02-21 19:01:36,160 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_DONE
2015-02-21 19:01:36,160 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000001_0 of type TA_DONE
2015-02-21 19:01:36,160 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000001_0 TaskAttempt Transitioned from RUNNING to SUCCESS_CONTAINER_CLEANUP
2015-02-21 19:01:36,160 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherEvent.EventType: CONTAINER_REMOTE_CLEANUP for container container_1424550134651_0002_01_000005 taskAttempt attempt_1424550134651_0002_m_000001_0
2015-02-21 19:01:36,161 INFO [ContainerLauncher #8] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_1424550134651_0002_01_000005 taskAttempt attempt_1424550134651_0002_m_000001_0
2015-02-21 19:01:36,161 INFO [ContainerLauncher #8] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1424550134651_0002_m_000001_0
2015-02-21 19:01:36,161 INFO [ContainerLauncher #8] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : hadoop3.rdpratti.com:8041
2015-02-21 19:01:36,162 DEBUG [ContainerLauncher #8] org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, Service: 192.168.2.252:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@5b62cd3d)
2015-02-21 19:01:36,162 DEBUG [ContainerLauncher #8] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:88)
2015-02-21 19:01:36,162 DEBUG [ContainerLauncher #8] org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
2015-02-21 19:01:36,162 DEBUG [ContainerLauncher #8] org.apache.hadoop.ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@27c8bfa4
2015-02-21 19:01:36,162 DEBUG [ContainerLauncher #8] org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.
2015-02-21 19:01:36,162 DEBUG [ContainerLauncher #8] org.apache.hadoop.ipc.Client: Connecting to hadoop3.rdpratti.com/192.168.2.252:8041
2015-02-21 19:01:36,163 DEBUG [ContainerLauncher #8] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
2015-02-21 19:01:36,163 DEBUG [ContainerLauncher #8] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 19:01:36,164 DEBUG [ContainerLauncher #8] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"vMMRWPufRwYrMUhIqxwXvJe9TnNtjo6A/6a2eG8i\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}

2015-02-21 19:01:36,164 DEBUG [ContainerLauncher #8] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@363fcfd9
2015-02-21 19:01:36,164 INFO [ContainerLauncher #8] org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: 192.168.2.252:8041. Current token is Kind: NMToken, Service: 192.168.2.252:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@1766cb62)
2015-02-21 19:01:36,164 DEBUG [ContainerLauncher #8] org.apache.hadoop.security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2015-02-21 19:01:36,165 DEBUG [ContainerLauncher #8] org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2015-02-21 19:01:36,165 DEBUG [ContainerLauncher #8] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting username: AAABS63OA3sAAAACAAAAAQAZaGFkb29wMy5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo
2015-02-21 19:01:36,165 DEBUG [ContainerLauncher #8] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting userPassword
2015-02-21 19:01:36,165 DEBUG [ContainerLauncher #8] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting realm: default
2015-02-21 19:01:36,166 DEBUG [ContainerLauncher #8] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
token: "charset=utf-8,username=\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMy5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=\"default\",nonce=\"vMMRWPufRwYrMUhIqxwXvJe9TnNtjo6A/6a2eG8i\",nc=00000001,cnonce=\"79SoRlbl09jNuEMt7PsWONmSaze9H1qaAZTBxFr9\",digest-uri=\"/default\",maxbuf=65536,response=7a98412ec351cfb3b2b739772a884a78,qop=auth"
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
}

2015-02-21 19:01:36,167 DEBUG [ContainerLauncher #8] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: SUCCESS
token: "rspauth=3727e8c10efa36093b6303299a503d94"

2015-02-21 19:01:36,168 DEBUG [ContainerLauncher #8] org.apache.hadoop.ipc.Client: Negotiated QOP is :auth
2015-02-21 19:01:36,168 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001: starting, having connections 3
2015-02-21 19:01:36,171 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001 sending #49
2015-02-21 19:01:36,174 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001 got value #49
2015-02-21 19:01:36,174 DEBUG [ContainerLauncher #8] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: stopContainers took 12ms
2015-02-21 19:01:36,174 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001: closed
2015-02-21 19:01:36,174 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001: stopped, remaining connections 2
2015-02-21 19:01:36,174 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_CONTAINER_CLEANED
2015-02-21 19:01:36,174 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000001_0 of type TA_CONTAINER_CLEANED
2015-02-21 19:01:36,174 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000001_0 TaskAttempt Transitioned from SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
2015-02-21 19:01:36,175 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:36,175 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:36,175 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: MAP_ATTEMPT_FINISHED
2015-02-21 19:01:36,175 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType: T_ATTEMPT_SUCCEEDED
2015-02-21 19:01:36,175 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:36,175 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_m_000001 of type T_ATTEMPT_SUCCEEDED
2015-02-21 19:01:36,175 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1424550134651_0002_m_000001_0
2015-02-21 19:01:36,175 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1424550134651_0002_m_000001 Task Transitioned from RUNNING to SUCCEEDED
2015-02-21 19:01:36,175 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:36,175 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskAttemptCompletedEvent.EventType: JOB_TASK_ATTEMPT_COMPLETED
2015-02-21 19:01:36,175 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_TASK_ATTEMPT_COMPLETED
2015-02-21 19:01:36,175 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskEvent.EventType: JOB_TASK_COMPLETED
2015-02-21 19:01:36,175 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_TASK_COMPLETED
2015-02-21 19:01:36,175 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 4
2015-02-21 19:01:36,175 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: TASK_FINISHED
2015-02-21 19:01:36,177 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=6, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=30720
2015-02-21 19:01:36,177 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:36,177 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock 33976 lastFlushOffset 30812
2015-02-21 19:01:36,177 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 6
2015-02-21 19:01:36,177 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 6
2015-02-21 19:01:36,177 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:6 offsetInBlock:30720 lastPacketInBlock:false lastByteOffsetInBlock: 33976
2015-02-21 19:01:36,180 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 6 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1881536
2015-02-21 19:01:36,180 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler MAP_ATTEMPT_FINISHED
2015-02-21 19:01:36,180 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:36,182 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=7, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=33792
2015-02-21 19:01:36,182 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:36,182 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock 36716 lastFlushOffset 33976
2015-02-21 19:01:36,182 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 7
2015-02-21 19:01:36,182 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 7
2015-02-21 19:01:36,183 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:7 offsetInBlock:33792 lastPacketInBlock:false lastByteOffsetInBlock: 36716
2015-02-21 19:01:36,185 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 7 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1722825
2015-02-21 19:01:36,185 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler TASK_FINISHED
2015-02-21 19:01:36,631 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:4 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:3 AssignedReds:0 CompletedMaps:4 CompletedReds:0 ContAlloc:5 ContRel:0 HostLocal:2 RackLocal:3
2015-02-21 19:01:36,631 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #50
2015-02-21 19:01:36,633 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #50
2015-02-21 19:01:36,633 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms
2015-02-21 19:01:36,633 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=
2015-02-21 19:01:36,633 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold reached. Scheduling reduces.
2015-02-21 19:01:36,633 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: All maps assigned. Ramping up all remaining reduces:4
2015-02-21 19:01:36,633 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Added priority=10
2015-02-21 19:01:36,633 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=10 resourceName=* numContainers=1 #asks=1
2015-02-21 19:01:36,633 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=10 resourceName=* numContainers=2 #asks=1
2015-02-21 19:01:36,633 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=10 resourceName=* numContainers=3 #asks=1
2015-02-21 19:01:36,633 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: addResourceRequest: applicationId=2 priority=10 resourceName=* numContainers=4 #asks=1
2015-02-21 19:01:36,633 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:4 AssignedMaps:3 AssignedReds:0 CompletedMaps:4 CompletedReds:0 ContAlloc:5 ContRel:0 HostLocal:2 RackLocal:3
2015-02-21 19:01:36,954 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #117
2015-02-21 19:01:36,954 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#117 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:36,954 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:36,955 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 0 procesingTime= 1
2015-02-21 19:01:36,955 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#117 Retry#0
2015-02-21 19:01:36,955 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#117 Retry#0 Wrote 171 bytes.
2015-02-21 19:01:36,956 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #118
2015-02-21 19:01:36,956 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#118 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:36,956 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:36,957 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 1
2015-02-21 19:01:36,957 DEBUG [IPC
Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#118 Retry#0=0A= 2015-02-21 19:01:36,957 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#118 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:36,958 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #119=0A= 2015-02-21 19:01:36,958 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#119 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:36,958 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:36,958 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 0=0A= 2015-02-21 19:01:36,958 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#119 Retry#0=0A= 2015-02-21 19:01:36,959 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#119 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:37,634 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #51=0A= 2015-02-21 19:01:37,636 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #51=0A= 2015-02-21 19:01:37,636 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms=0A= 2015-02-21 19:01:37,636 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: = getResources() for application_1424550134651_0002: ask=3D1 release=3D 0 = newContainers=3D0 finishedContainers=3D2 resourcelimit=3D knownNMs=3D4=0A= 2015-02-21 19:01:37,636 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: = headroom=3D=0A= 2015-02-21 19:01:37,636 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received = completed container container_1424550134651_0002_01_000004=0A= 2015-02-21 19:01:37,636 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: = TA_CONTAINER_COMPLETED=0A= 2015-02-21 19:01:37,636 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received = completed container container_1424550134651_0002_01_000005=0A= 2015-02-21 19:01:37,636 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing 
= attempt_1424550134651_0002_m_000000_0 of type TA_CONTAINER_COMPLETED=0A= 2015-02-21 19:01:37,636 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After = Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:4 AssignedMaps:1 = AssignedReds:0 CompletedMaps:4 CompletedReds:0 ContAlloc:5 ContRel:0 = HostLocal:2 RackLocal:3=0A= 2015-02-21 19:01:37,636 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptDiagnosticsUpdate= Event.EventType: TA_DIAGNOSTICS_UPDATE=0A= 2015-02-21 19:01:37,636 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_m_000000_0 of type TA_DIAGNOSTICS_UPDATE=0A= 2015-02-21 19:01:37,636 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics = report from attempt_1424550134651_0002_m_000000_0: Container killed by = the ApplicationMaster.=0A= Container killed on request. Exit code is 143=0A= Container exited with a non-zero exit code 143=0A= =0A= 2015-02-21 19:01:37,636 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: = TA_CONTAINER_COMPLETED=0A= 2015-02-21 19:01:37,636 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_m_000001_0 of type TA_CONTAINER_COMPLETED=0A= 2015-02-21 19:01:37,637 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptDiagnosticsUpdate= Event.EventType: TA_DIAGNOSTICS_UPDATE=0A= 2015-02-21 19:01:37,637 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_m_000001_0 of type TA_DIAGNOSTICS_UPDATE=0A= 2015-02-21 19:01:37,637 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics = report from attempt_1424550134651_0002_m_000001_0: Container killed by = the ApplicationMaster.=0A= Container killed on request. 
Exit code is 143=0A= Container exited with a non-zero exit code 143=0A= =0A= 2015-02-21 19:01:37,770 DEBUG [IPC Server listener on 35954] = org.apache.hadoop.ipc.Server: Server connection from = 192.168.2.250:34286; # active connections: 1; # queued calls: 0=0A= 2015-02-21 19:01:37,960 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #120=0A= 2015-02-21 19:01:37,960 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#120 Retry#0 for RpcKind = RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:37,960 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:37,961 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents = queueTime=3D 0 procesingTime=3D 1=0A= 2015-02-21 19:01:37,961 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#120 Retry#0=0A= 2015-02-21 19:01:37,961 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#120 Retry#0 Wrote 32 bytes.=0A= 2015-02-21 19:01:37,961 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #121=0A= 2015-02-21 19:01:37,962 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#121 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:37,962 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:37,962 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 0=0A= 2015-02-21 19:01:37,963 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#121 Retry#0=0A= 2015-02-21 19:01:37,963 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#121 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:37,964 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #122=0A= 2015-02-21 19:01:37,965 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#122 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:37,965 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 
19:01:37,965 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 0=0A= 2015-02-21 19:01:37,965 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#122 Retry#0=0A= 2015-02-21 19:01:37,965 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#122 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:37,979 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #-33=0A= 2015-02-21 19:01:37,979 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.security.SaslRpcServer: Created SASL server with = mechanism =3D DIGEST-MD5=0A= 2015-02-21 19:01:37,980 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Sending sasl message state: NEGOTIATE=0A= auths {=0A= method: "TOKEN"=0A= mechanism: "DIGEST-MD5"=0A= protocol: ""=0A= serverId: "default"=0A= challenge: = "realm=3D\"default\",nonce=3D\"8OomvRHhOgUfjLID+UZKNz63yZGvGWyji59d2HPi\"= ,qop=3D\"auth\",charset=3Dutf-8,algorithm=3Dmd5-sess"=0A= }=0A= auths {=0A= method: "SIMPLE"=0A= mechanism: ""=0A= }=0A= =0A= 2015-02-21 19:01:37,980 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = responding to null from 192.168.2.250:34286 Call#-33 Retry#-1=0A= 2015-02-21 19:01:37,980 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = responding to null from 192.168.2.250:34286 Call#-33 Retry#-1 Wrote 178 = bytes.=0A= 2015-02-21 19:01:38,190 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #-33=0A= 2015-02-21 19:01:38,191 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Have read input token of size 270 for = processing by saslServer.evaluateResponse()=0A= 2015-02-21 19:01:38,191 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 = callback: setting password for client: job_1424550134651_0002 = (auth:SIMPLE)=0A= 2015-02-21 19:01:38,191 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 = callback: setting canonicalized client ID: job_1424550134651_0002=0A= 2015-02-21 19:01:38,192 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Will send SUCCESS token of size 40 from = saslServer.=0A= 2015-02-21 19:01:38,192 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: SASL server context established. 
= Negotiated QoP is auth=0A= 2015-02-21 19:01:38,192 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: SASL server successfully authenticated = client: job_1424550134651_0002 (auth:SIMPLE)=0A= 2015-02-21 19:01:38,192 INFO [Socket Reader #1 for port 35954] = SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for = job_1424550134651_0002 (auth:SIMPLE)=0A= 2015-02-21 19:01:38,192 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Sending sasl message state: SUCCESS=0A= token: "rspauth=3Dac8787440d8681c29b46a904d7495b66"=0A= =0A= 2015-02-21 19:01:38,192 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = responding to null from 192.168.2.250:34286 Call#-33 Retry#-1=0A= 2015-02-21 19:01:38,192 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = responding to null from 192.168.2.250:34286 Call#-33 Retry#-1 Wrote 64 = bytes.=0A= 2015-02-21 19:01:38,196 DEBUG [IPC Server idle connection scanner for = port 59910] org.apache.hadoop.ipc.Server: IPC Server idle connection = scanner for port 59910: task running=0A= 2015-02-21 19:01:38,226 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #-3=0A= 2015-02-21 19:01:38,226 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Successfully authorized userInfo {=0A= }=0A= protocol: "org.apache.hadoop.mapred.TaskUmbilicalProtocol"=0A= =0A= 2015-02-21 19:01:38,226 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #0=0A= 2015-02-21 19:01:38,227 DEBUG [IPC Server handler 28 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 28 on 35954: = getTask(org.apache.hadoop.mapred.JvmContext@13100598), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.250:34286 Call#0 Retry#0 for RpcKind RPC_WRITABLE=0A= 2015-02-21 19:01:38,227 DEBUG [IPC Server handler 28 on 35954] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:job_1424550134651_0002 (auth:TOKEN) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:38,227 INFO [IPC Server handler 28 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : = jvm_1424550134651_0002_m_000008 asked for a task=0A= 2015-02-21 19:01:38,227 INFO [IPC Server handler 28 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: = jvm_1424550134651_0002_m_000008 given task: = attempt_1424550134651_0002_m_000002_0=0A= 2015-02-21 19:01:38,227 DEBUG [IPC Server handler 28 on 35954] = org.apache.hadoop.ipc.Server: Served: getTask queueTime=3D 1 = procesingTime=3D 0=0A= 2015-02-21 19:01:38,228 DEBUG [IPC Server handler 28 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 28 on 35954: responding = to getTask(org.apache.hadoop.mapred.JvmContext@13100598), rpc = version=3D2, client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.250:34286 Call#0 Retry#0=0A= 2015-02-21 19:01:38,228 DEBUG [IPC Server handler 28 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 28 on 35954: responding = to getTask(org.apache.hadoop.mapred.JvmContext@13100598), rpc = version=3D2, client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.250:34286 Call#0 Retry#0 Wrote 379 bytes.=0A= 2015-02-21 19:01:38,637 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from 
cloudera sending #52=0A= 2015-02-21 19:01:38,638 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #52=0A= 2015-02-21 19:01:38,638 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 1ms=0A= 2015-02-21 19:01:38,801 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera: closed=0A= 2015-02-21 19:01:38,801 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera: stopped, = remaining connections 1=0A= 2015-02-21 19:01:38,967 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #123=0A= 2015-02-21 19:01:38,967 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#123 Retry#0 for RpcKind = RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:38,967 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:38,967 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents = queueTime=3D 0 procesingTime=3D 0=0A= 2015-02-21 19:01:38,967 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#123 Retry#0=0A= 2015-02-21 19:01:38,968 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#123 Retry#0 Wrote 32 bytes.=0A= 2015-02-21 19:01:38,968 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #124=0A= 2015-02-21 19:01:38,968 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#124 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:38,968 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:38,969 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 1 = procesingTime=3D 0=0A= 2015-02-21 19:01:38,969 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#124 Retry#0=0A= 2015-02-21 19:01:38,969 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to 
org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#124 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:38,970 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #125=0A= 2015-02-21 19:01:38,970 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#125 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:38,970 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:38,971 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 1=0A= 2015-02-21 19:01:38,971 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#125 Retry#0=0A= 2015-02-21 19:01:38,971 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#125 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:39,177 DEBUG [IPC Server idle connection scanner for = port 35954] org.apache.hadoop.ipc.Server: IPC Server idle connection = scanner for port 35954: task running=0A= 2015-02-21 19:01:39,639 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #53=0A= 2015-02-21 19:01:39,643 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #53=0A= 2015-02-21 19:01:39,643 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 4ms=0A= 2015-02-21 19:01:39,972 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #126=0A= 2015-02-21 19:01:39,972 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#126 Retry#0 for RpcKind = RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:39,972 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:39,973 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents = queueTime=3D 1 procesingTime=3D 0=0A= 2015-02-21 19:01:39,973 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#126 Retry#0=0A= 2015-02-21 19:01:39,973 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 
Call#126 Retry#0 Wrote 32 bytes.=0A= 2015-02-21 19:01:39,974 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #127=0A= 2015-02-21 19:01:39,974 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#127 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:39,974 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:39,974 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 0=0A= 2015-02-21 19:01:39,975 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#127 Retry#0=0A= 2015-02-21 19:01:39,975 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#127 Retry#0 Wrote 280 bytes.=0A= 2015-02-21 19:01:39,976 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #128=0A= 2015-02-21 19:01:39,976 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#128 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:39,976 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:39,976 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 0=0A= 2015-02-21 19:01:39,976 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#128 Retry#0=0A= 2015-02-21 19:01:39,976 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#128 Retry#0 Wrote 281 bytes.=0A= 2015-02-21 19:01:40,644 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #54=0A= 2015-02-21 19:01:40,646 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #54=0A= 2015-02-21 19:01:40,646 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms=0A= 2015-02-21 19:01:40,978 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #129=0A= 2015-02-21 19:01:40,978 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents 
from 192.168.2.253:57473 Call#129 Retry#0 for RpcKind = RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:40,978 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:40,978 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents = queueTime=3D 0 procesingTime=3D 0=0A= 2015-02-21 19:01:40,978 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#129 Retry#0=0A= 2015-02-21 19:01:40,978 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#129 Retry#0 Wrote 33 bytes.=0A= 2015-02-21 19:01:40,979 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #130=0A= 2015-02-21 19:01:40,979 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#130 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:40,979 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:40,980 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 1=0A= 2015-02-21 19:01:40,980 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#130 Retry#0=0A= 2015-02-21 19:01:40,980 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#130 Retry#0 Wrote 281 bytes.=0A= 2015-02-21 19:01:40,981 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #131=0A= 2015-02-21 19:01:40,981 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#131 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:40,981 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:40,982 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 1=0A= 2015-02-21 19:01:40,982 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#131 Retry#0=0A= 2015-02-21 19:01:40,982 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to 
org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#131 Retry#0 Wrote 281 bytes.=0A= 2015-02-21 19:01:41,647 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #55=0A= 2015-02-21 19:01:41,651 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #55=0A= 2015-02-21 19:01:41,651 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 4ms=0A= 2015-02-21 19:01:41,651 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: = headroom=3D=0A= 2015-02-21 19:01:41,651 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received new = Container :Container: [ContainerId: = container_1424550134651_0002_01_000009, NodeId: = hadoop2.rdpratti.com:8041, NodeHttpAddress: hadoop2.rdpratti.com:8042, = Resource: , Priority: 10, Token: Token { kind: = ContainerToken, service: 192.168.2.251:8041 }, ]=0A= 2015-02-21 19:01:41,651 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received new = Container :Container: [ContainerId: = container_1424550134651_0002_01_000010, NodeId: = hadoop3.rdpratti.com:8041, NodeHttpAddress: hadoop3.rdpratti.com:8042, = Resource: , Priority: 10, Token: Token { kind: = ContainerToken, service: 192.168.2.252:8041 }, ]=0A= 2015-02-21 19:01:41,651 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got = allocated containers 2=0A= 2015-02-21 19:01:41,651 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning = container container_1424550134651_0002_01_000009 with priority 10 to NM = hadoop2.rdpratti.com:8041=0A= 2015-02-21 19:01:41,651 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning = container container_1424550134651_0002_01_000010 with priority 10 to NM = hadoop3.rdpratti.com:8041=0A= 2015-02-21 19:01:41,651 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning = container Container: [ContainerId: = container_1424550134651_0002_01_000009, NodeId: = hadoop2.rdpratti.com:8041, NodeHttpAddress: hadoop2.rdpratti.com:8042, = Resource: , Priority: 10, Token: Token { kind: = ContainerToken, service: 192.168.2.251:8041 }, ] to reduce=0A= 2015-02-21 19:01:41,652 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned to = reduce=0A= 2015-02-21 19:01:41,652 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE = decResourceRequest: applicationId=3D2 priority=3D10 resourceName=3D* = numContainers=3D4 #asks=3D0=0A= 2015-02-21 19:01:41,652 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER = decResourceRequest: applicationId=3D2 priority=3D10 resourceName=3D* = numContainers=3D3 #asks=3D1=0A= 2015-02-21 19:01:41,652 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerAssigned= Event.EventType: TA_ASSIGNED=0A= 2015-02-21 
19:01:41,652 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned = container container_1424550134651_0002_01_000009 to = attempt_1424550134651_0002_r_000000_0=0A= 2015-02-21 19:01:41,652 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_r_000000_0 of type TA_ASSIGNED=0A= 2015-02-21 19:01:41,652 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned = container (Container: [ContainerId: = container_1424550134651_0002_01_000009, NodeId: = hadoop2.rdpratti.com:8041, NodeHttpAddress: hadoop2.rdpratti.com:8042, = Resource: , Priority: 10, Token: Token { kind: = ContainerToken, service: 192.168.2.251:8041 }, ]) to task = attempt_1424550134651_0002_r_000000_0 on node hadoop2.rdpratti.com:8041=0A= 2015-02-21 19:01:41,652 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning = container Container: [ContainerId: = container_1424550134651_0002_01_000010, NodeId: = hadoop3.rdpratti.com:8041, NodeHttpAddress: hadoop3.rdpratti.com:8042, = Resource: , Priority: 10, Token: Token { kind: = ContainerToken, service: 192.168.2.252:8041 }, ] to reduce=0A= 2015-02-21 19:01:41,652 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned to = reduce=0A= 2015-02-21 19:01:41,652 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE = decResourceRequest: applicationId=3D2 priority=3D10 resourceName=3D* = numContainers=3D3 #asks=3D1=0A= 2015-02-21 19:01:41,652 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER = decResourceRequest: applicationId=3D2 priority=3D10 resourceName=3D* = numContainers=3D2 #asks=3D1=0A= 2015-02-21 19:01:41,652 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned = container container_1424550134651_0002_01_000010 to = attempt_1424550134651_0002_r_000001_0=0A= 2015-02-21 19:01:41,652 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned = container (Container: [ContainerId: = container_1424550134651_0002_01_000010, NodeId: = hadoop3.rdpratti.com:8041, NodeHttpAddress: hadoop3.rdpratti.com:8042, = Resource: , Priority: 10, Token: Token { kind: = ContainerToken, service: 192.168.2.252:8041 }, ]) to task = attempt_1424550134651_0002_r_000001_0 on node hadoop3.rdpratti.com:8041=0A= 2015-02-21 19:01:41,652 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After = Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:2 AssignedMaps:1 = AssignedReds:2 CompletedMaps:4 CompletedReds:0 ContAlloc:7 ContRel:0 = HostLocal:2 RackLocal:3=0A= 2015-02-21 19:01:41,656 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapred.SortedRanges: currentIndex 0 0:0=0A= 2015-02-21 19:01:41,658 INFO [AsyncDispatcher event handler] = org.apache.hadoop.yarn.util.RackResolver: Resolved hadoop2.rdpratti.com = to /default=0A= 2015-02-21 19:01:41,659 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: = attempt_1424550134651_0002_r_000000_0 TaskAttempt Transitioned from = UNASSIGNED to ASSIGNED=0A= 2015-02-21 19:01:41,659 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = 
org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerAssigned= Event.EventType: TA_ASSIGNED=0A= 2015-02-21 19:01:41,659 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_r_000001_0 of type TA_ASSIGNED=0A= 2015-02-21 19:01:41,659 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapred.SortedRanges: currentIndex 0 0:0=0A= 2015-02-21 19:01:41,659 INFO [AsyncDispatcher event handler] = org.apache.hadoop.yarn.util.RackResolver: Resolved hadoop3.rdpratti.com = to /default=0A= 2015-02-21 19:01:41,660 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: = attempt_1424550134651_0002_r_000001_0 TaskAttempt Transitioned from = UNASSIGNED to ASSIGNED=0A= 2015-02-21 19:01:41,660 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerRemoteLaunchEvent.Ev= entType: CONTAINER_REMOTE_LAUNCH for container = container_1424550134651_0002_01_000009 taskAttempt = attempt_1424550134651_0002_r_000000_0=0A= 2015-02-21 19:01:41,660 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: = TASK_CONTAINER_NEED_UPDATE=0A= 2015-02-21 19:01:41,660 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerRemoteLaunchEvent.Ev= entType: CONTAINER_REMOTE_LAUNCH for container = container_1424550134651_0002_01_000010 taskAttempt = attempt_1424550134651_0002_r_000001_0=0A= 2015-02-21 19:01:41,660 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: = TASK_CONTAINER_NEED_UPDATE=0A= 2015-02-21 19:01:41,660 INFO [ContainerLauncher #0] = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: = Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container = container_1424550134651_0002_01_000010 taskAttempt = attempt_1424550134651_0002_r_000001_0=0A= 2015-02-21 19:01:41,660 INFO [ContainerLauncher #0] = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: = Launching attempt_1424550134651_0002_r_000001_0=0A= 2015-02-21 19:01:41,660 INFO [ContainerLauncher #0] = org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: = Opening proxy : hadoop3.rdpratti.com:8041=0A= 2015-02-21 19:01:41,660 INFO [ContainerLauncher #9] = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: = Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container = container_1424550134651_0002_01_000009 taskAttempt = attempt_1424550134651_0002_r_000000_0=0A= 2015-02-21 19:01:41,661 INFO [ContainerLauncher #9] = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: = Launching attempt_1424550134651_0002_r_000000_0=0A= 2015-02-21 19:01:41,661 DEBUG [ContainerLauncher #0] = org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, = Service: 192.168.2.252:8041, Ident: = (org.apache.hadoop.yarn.security.NMTokenIdentifier@2e97d63b)=0A= 2015-02-21 19:01:41,661 DEBUG [ContainerLauncher #0] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) = 
from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(Serve= rProxy.java:88)=0A= 2015-02-21 19:01:41,661 DEBUG [ContainerLauncher #0] = org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a = HadoopYarnProtoRpc proxy for protocol interface = org.apache.hadoop.yarn.api.ContainerManagementProtocol=0A= 2015-02-21 19:01:41,661 DEBUG [ContainerLauncher #0] = org.apache.hadoop.ipc.Client: getting client out of cache: = org.apache.hadoop.ipc.Client@27c8bfa4=0A= 2015-02-21 19:01:41,661 INFO [ContainerLauncher #9] = org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: = Opening proxy : hadoop2.rdpratti.com:8041=0A= 2015-02-21 19:01:41,665 DEBUG [ContainerLauncher #0] = org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.=0A= 2015-02-21 19:01:41,665 DEBUG [ContainerLauncher #0] = org.apache.hadoop.ipc.Client: Connecting to = hadoop3.rdpratti.com/192.168.2.252:8041=0A= 2015-02-21 19:01:41,665 DEBUG [ContainerLauncher #9] = org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, = Service: 192.168.2.251:8041, Ident: = (org.apache.hadoop.yarn.security.NMTokenIdentifier@795cb9b1)=0A= 2015-02-21 19:01:41,666 DEBUG [ContainerLauncher #0] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) = from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:7= 12)=0A= 2015-02-21 19:01:41,666 DEBUG [ContainerLauncher #9] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) = from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(Serve= rProxy.java:88)=0A= 2015-02-21 19:01:41,666 DEBUG [ContainerLauncher #9] = org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a = HadoopYarnProtoRpc proxy for protocol interface = org.apache.hadoop.yarn.api.ContainerManagementProtocol=0A= 2015-02-21 19:01:41,666 DEBUG [ContainerLauncher #0] = org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: = NEGOTIATE=0A= =0A= 2015-02-21 19:01:41,666 DEBUG [ContainerLauncher #9] = org.apache.hadoop.ipc.Client: getting client out of cache: = org.apache.hadoop.ipc.Client@27c8bfa4=0A= 2015-02-21 19:01:41,667 DEBUG [ContainerLauncher #0] = org.apache.hadoop.security.SaslRpcClient: Received SASL message state: = NEGOTIATE=0A= auths {=0A= method: "TOKEN"=0A= mechanism: "DIGEST-MD5"=0A= protocol: ""=0A= serverId: "default"=0A= challenge: = "realm=3D\"default\",nonce=3D\"rYvQGmCwvy8YRLPENActvHfx+O9E0Blj0zAi0Uuw\"= ,qop=3D\"auth\",charset=3Dutf-8,algorithm=3Dmd5-sess"=0A= }=0A= =0A= 2015-02-21 19:01:41,667 DEBUG [ContainerLauncher #0] = org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface = org.apache.hadoop.yarn.api.ContainerManagementProtocolPB = info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@25970= 4c2=0A= 2015-02-21 19:01:41,667 INFO [ContainerLauncher #0] = org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: = 192.168.2.252:8041. 
Current token is Kind: NMToken, Service: = 192.168.2.252:8041, Ident: = (org.apache.hadoop.yarn.security.NMTokenIdentifier@5db96fd5)=0A= 2015-02-21 19:01:41,667 DEBUG [ContainerLauncher #0] = org.apache.hadoop.security.SaslRpcClient: Creating SASL = DIGEST-MD5(TOKEN) client to authenticate to service at default=0A= 2015-02-21 19:01:41,667 DEBUG [ContainerLauncher #0] = org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for = protocol ContainerManagementProtocolPB=0A= 2015-02-21 19:01:41,668 DEBUG [ContainerLauncher #0] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = username: = AAABS63OA3sAAAACAAAAAQAZaGFkb29wMy5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB0= 0Yo=0A= 2015-02-21 19:01:41,668 DEBUG [ContainerLauncher #0] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = userPassword=0A= 2015-02-21 19:01:41,668 DEBUG [ContainerLauncher #0] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = realm: default=0A= 2015-02-21 19:01:41,668 DEBUG [ContainerLauncher #0] = org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: = INITIATE=0A= token: = "charset=3Dutf-8,username=3D\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMy5yZHByYXR= 0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=3D\"default\",nonce=3D\"rYvQGmC= wvy8YRLPENActvHfx+O9E0Blj0zAi0Uuw\",nc=3D00000001,cnonce=3D\"vtnzL4Ax1GZk= s1Xop0tYbgaOlgOYIzPmq8l/9yru\",digest-uri=3D\"/default\",maxbuf=3D65536,r= esponse=3D528553591b6512a091ac3078f1fda9c9,qop=3Dauth"=0A= auths {=0A= method: "TOKEN"=0A= mechanism: "DIGEST-MD5"=0A= protocol: ""=0A= serverId: "default"=0A= }=0A= =0A= 2015-02-21 19:01:41,668 DEBUG [ContainerLauncher #9] = org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.=0A= 2015-02-21 19:01:41,669 DEBUG [ContainerLauncher #9] = org.apache.hadoop.ipc.Client: Connecting to = hadoop2.rdpratti.com/192.168.2.251:8041=0A= 2015-02-21 19:01:41,669 DEBUG [ContainerLauncher #9] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) = from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:7= 12)=0A= 2015-02-21 19:01:41,669 DEBUG [ContainerLauncher #9] = org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: = NEGOTIATE=0A= =0A= 2015-02-21 19:01:41,670 DEBUG [ContainerLauncher #0] = org.apache.hadoop.security.SaslRpcClient: Received SASL message state: = SUCCESS=0A= token: "rspauth=3D0b0793baf18f4b0c8534396aee66ed24"=0A= =0A= 2015-02-21 19:01:41,670 DEBUG [ContainerLauncher #0] = org.apache.hadoop.ipc.Client: Negotiated QOP is :auth=0A= 2015-02-21 19:01:41,670 DEBUG [ContainerLauncher #9] = org.apache.hadoop.security.SaslRpcClient: Received SASL message state: = NEGOTIATE=0A= auths {=0A= method: "TOKEN"=0A= mechanism: "DIGEST-MD5"=0A= protocol: ""=0A= serverId: "default"=0A= challenge: = "realm=3D\"default\",nonce=3D\"MJT7r24AuCjgZ5i/0NQLXxKjpPf7w+cZVLc7xJZV\"= ,qop=3D\"auth\",charset=3Dutf-8,algorithm=3Dmd5-sess"=0A= }=0A= =0A= 2015-02-21 19:01:41,670 DEBUG [ContainerLauncher #9] = org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface = org.apache.hadoop.yarn.api.ContainerManagementProtocolPB = info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@5a14b= e8c=0A= 2015-02-21 19:01:41,672 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001 sending #56=0A= 2015-02-21 19:01:41,672 INFO 
[ContainerLauncher #9] = org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: = 192.168.2.251:8041. Current token is Kind: NMToken, Service: = 192.168.2.251:8041, Ident: = (org.apache.hadoop.yarn.security.NMTokenIdentifier@45c066e2)=0A= 2015-02-21 19:01:41,672 DEBUG [IPC Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001: starting, having connections 3=0A= 2015-02-21 19:01:41,672 DEBUG [ContainerLauncher #9] = org.apache.hadoop.security.SaslRpcClient: Creating SASL = DIGEST-MD5(TOKEN) client to authenticate to service at default=0A= 2015-02-21 19:01:41,673 DEBUG [ContainerLauncher #9] = org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for = protocol ContainerManagementProtocolPB=0A= 2015-02-21 19:01:41,673 DEBUG [ContainerLauncher #9] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = username: = AAABS63OA3sAAAACAAAAAQAZaGFkb29wMi5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB0= 0Yo=0A= 2015-02-21 19:01:41,673 DEBUG [ContainerLauncher #9] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = userPassword=0A= 2015-02-21 19:01:41,673 DEBUG [ContainerLauncher #9] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = realm: default=0A= 2015-02-21 19:01:41,674 DEBUG [ContainerLauncher #9] = org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: = INITIATE=0A= token: = "charset=3Dutf-8,username=3D\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMi5yZHByYXR= 0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=3D\"default\",nonce=3D\"MJT7r24= AuCjgZ5i/0NQLXxKjpPf7w+cZVLc7xJZV\",nc=3D00000001,cnonce=3D\"30jrWFp7XeEW= ipNc7X8ljS0ZhQ4FM0idUyof3BzK\",digest-uri=3D\"/default\",maxbuf=3D65536,r= esponse=3D0be5830662e2980745f51241334b065d,qop=3Dauth"=0A= auths {=0A= method: "TOKEN"=0A= mechanism: "DIGEST-MD5"=0A= protocol: ""=0A= serverId: "default"=0A= }=0A= =0A= 2015-02-21 19:01:41,675 DEBUG [IPC Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001 got value #56=0A= 2015-02-21 19:01:41,675 DEBUG [ContainerLauncher #0] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: startContainers took 10ms=0A= 2015-02-21 19:01:41,675 DEBUG [IPC Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001: closed=0A= 2015-02-21 19:01:41,675 DEBUG [IPC Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001: stopped, remaining connections 2=0A= 2015-02-21 19:01:41,676 INFO [ContainerLauncher #0] = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: = Shuffle port returned by ContainerManager for = attempt_1424550134651_0002_r_000001_0 : 13562=0A= 2015-02-21 19:01:41,676 DEBUG [AsyncDispatcher event handler] = 
org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerLaunchedEvent.EventType: TA_CONTAINER_LAUNCHED
2015-02-21 19:01:41,676 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000001_0 of type TA_CONTAINER_LAUNCHED
2015-02-21 19:01:41,676 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1424550134651_0002_r_000001_0] using containerId: [container_1424550134651_0002_01_000010 on NM: [hadoop3.rdpratti.com:8041]
2015-02-21 19:01:41,676 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_r_000001_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2015-02-21 19:01:41,676 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:41,676 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:41,676 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: REDUCE_ATTEMPT_STARTED
2015-02-21 19:01:41,677 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_START
2015-02-21 19:01:41,677 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:41,677 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType: T_ATTEMPT_LAUNCHED
2015-02-21 19:01:41,677 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_r_000001 of type T_ATTEMPT_LAUNCHED
2015-02-21 19:01:41,677 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1424550134651_0002_r_000001 Task Transitioned from SCHEDULED to RUNNING
2015-02-21 19:01:41,677 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=8, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=36352
2015-02-21 19:01:41,677 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler REDUCE_ATTEMPT_STARTED
2015-02-21 19:01:41,677 DEBUG [ContainerLauncher #9] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: SUCCESS
token: "rspauth=0de76364737051ebd6d24a2f55643a08"

2015-02-21 19:01:41,677 DEBUG [ContainerLauncher #9] org.apache.hadoop.ipc.Client: Negotiated QOP is :auth
2015-02-21 19:01:41,678 DEBUG [IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001: starting, having connections 2
2015-02-21 19:01:41,678 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001 sending #57
2015-02-21 19:01:41,696 DEBUG [IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001 got value #57
2015-02-21 19:01:41,696 DEBUG [ContainerLauncher #9] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: startContainers took 28ms
2015-02-21 19:01:41,696 DEBUG [IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001: closed
2015-02-21 19:01:41,696 DEBUG [IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001: stopped, remaining connections 1
2015-02-21 19:01:41,696 INFO [ContainerLauncher #9] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1424550134651_0002_r_000000_0 : 13562
2015-02-21 19:01:41,696 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerLaunchedEvent.EventType: TA_CONTAINER_LAUNCHED
2015-02-21 19:01:41,696 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000000_0 of type TA_CONTAINER_LAUNCHED
2015-02-21 19:01:41,696 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1424550134651_0002_r_000000_0] using containerId: [container_1424550134651_0002_01_000009 on NM: [hadoop2.rdpratti.com:8041]
2015-02-21 19:01:41,696 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_r_000000_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2015-02-21 19:01:41,696 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:41,697 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:41,697 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: REDUCE_ATTEMPT_STARTED
2015-02-21 19:01:41,697 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_START
2015-02-21 19:01:41,697 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:41,697 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType: T_ATTEMPT_LAUNCHED
2015-02-21 19:01:41,697 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_r_000000 of type T_ATTEMPT_LAUNCHED
2015-02-21 19:01:41,697 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1424550134651_0002_r_000000 Task Transitioned from SCHEDULED to RUNNING
2015-02-21 19:01:41,697 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler REDUCE_ATTEMPT_STARTED
2015-02-21 19:01:41,985 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #132
2015-02-21 19:01:41,985 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#132 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:41,985 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:41,986 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 0 procesingTime= 1
2015-02-21 19:01:41,986 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#132 Retry#0
2015-02-21 19:01:41,986 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#132 Retry#0 Wrote 33 bytes.
2015-02-21 19:01:41,987 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #133
2015-02-21 19:01:41,987 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#133 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:41,987 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:41,987 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:41,987 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#133 Retry#0
2015-02-21 19:01:41,987 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#133 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:41,988 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #134
2015-02-21 19:01:41,989 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#134 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:41,989 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:41,989 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:41,989 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#134 Retry#0
2015-02-21 19:01:41,989 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#134 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:42,653 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #58
2015-02-21 19:01:42,654 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #58
2015-02-21 19:01:42,654 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 1ms
2015-02-21 19:01:42,655 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1424550134651_0002: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit= knownNMs=4
2015-02-21 19:01:42,991 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #135
2015-02-21 19:01:42,991 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#135 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:42,991 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:42,991 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 0 procesingTime= 0
2015-02-21 19:01:42,991 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#135 Retry#0
2015-02-21 19:01:42,991 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#135 Retry#0 Wrote 33 bytes.
2015-02-21 19:01:42,992 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #136
2015-02-21 19:01:42,992 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#136 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:42,992 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:42,993 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 1
2015-02-21 19:01:42,993 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#136 Retry#0
2015-02-21 19:01:42,993 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#136 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:42,994 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #137
2015-02-21 19:01:42,994 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#137 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:42,994 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:42,994 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:42,995 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#137 Retry#0
2015-02-21 19:01:42,995 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#137 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:43,305 DEBUG [IPC Server listener on 35954] org.apache.hadoop.ipc.Server: Server connection from 192.168.2.252:43604; # active connections: 2; # queued calls: 0
2015-02-21 19:01:43,329 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #-33
2015-02-21 19:01:43,330 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.security.SaslRpcServer: Created SASL server with mechanism = DIGEST-MD5
2015-02-21 19:01:43,330 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Sending sasl message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"/5UGGzJuu7QCWYIfegTfHFB0l9N9/kAczChWfOoE\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}
auths {
  method: "SIMPLE"
  mechanism: ""
}

2015-02-21 19:01:43,330 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.252:43604 Call#-33 Retry#-1
2015-02-21 19:01:43,330 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.252:43604 Call#-33 Retry#-1 Wrote 178 bytes.
2015-02-21 19:01:43,438 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #-33
2015-02-21 19:01:43,438 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Have read input token of size 270 for processing by saslServer.evaluateResponse()
2015-02-21 19:01:43,439 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting password for client: job_1424550134651_0002 (auth:SIMPLE)
2015-02-21 19:01:43,439 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting canonicalized client ID: job_1424550134651_0002
2015-02-21 19:01:43,439 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Will send SUCCESS token of size 40 from saslServer.
2015-02-21 19:01:43,439 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: SASL server context established. Negotiated QoP is auth
2015-02-21 19:01:43,439 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: SASL server successfully authenticated client: job_1424550134651_0002 (auth:SIMPLE)
2015-02-21 19:01:43,439 INFO [Socket Reader #1 for port 35954] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1424550134651_0002 (auth:SIMPLE)
2015-02-21 19:01:43,439 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Sending sasl message state: SUCCESS
token: "rspauth=dd08b2b3429aa29df029b400450183a0"

2015-02-21 19:01:43,440 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.252:43604 Call#-33 Retry#-1
2015-02-21 19:01:43,440 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.252:43604 Call#-33 Retry#-1 Wrote 64 bytes.
2015-02-21 19:01:43,452 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #-3
2015-02-21 19:01:43,452 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Successfully authorized userInfo {
}
protocol: "org.apache.hadoop.mapred.TaskUmbilicalProtocol"

2015-02-21 19:01:43,452 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #0
2015-02-21 19:01:43,452 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 2 on 35954: getTask(org.apache.hadoop.mapred.JvmContext@2dc61522), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43604 Call#0 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:43,453 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:43,453 INFO [IPC Server handler 2 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1424550134651_0002_r_000010 asked for a task
2015-02-21 19:01:43,453 INFO [IPC Server handler 2 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1424550134651_0002_r_000010 given task: attempt_1424550134651_0002_r_000001_0
2015-02-21 19:01:43,453 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.ipc.Server: Served: getTask queueTime= 1 procesingTime= 0
2015-02-21 19:01:43,455 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 2 on 35954: responding to getTask(org.apache.hadoop.mapred.JvmContext@2dc61522), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43604 Call#0 Retry#0
2015-02-21 19:01:43,455 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 2 on 35954: responding to getTask(org.apache.hadoop.mapred.JvmContext@2dc61522), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43604 Call#0 Retry#0 Wrote 450 bytes.
2015-02-21 19:01:43,655 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #59
2015-02-21 19:01:43,657 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #59
2015-02-21 19:01:43,657 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms
2015-02-21 19:01:43,985 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #4
2015-02-21 19:01:43,985 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: statusUpdate(attempt_1424550134651_0002_m_000002_0, org.apache.hadoop.mapred.MapTaskStatus@7316d30b), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34286 Call#4 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:43,985 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:43,986 INFO [IPC Server handler 6 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1424550134651_0002_m_000002_0 is : 0.0
2015-02-21 19:01:43,986 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime= 0 procesingTime= 1
2015-02-21 19:01:43,986 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.EventType: TA_UPDATE
2015-02-21 19:01:43,986 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: responding to statusUpdate(attempt_1424550134651_0002_m_000002_0, org.apache.hadoop.mapred.MapTaskStatus@7316d30b), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34286 Call#4 Retry#0
2015-02-21 19:01:43,986 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000002_0 of type TA_UPDATE
2015-02-21 19:01:43,986 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: responding to statusUpdate(attempt_1424550134651_0002_m_000002_0, org.apache.hadoop.mapred.MapTaskStatus@7316d30b), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34286 Call#4 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:43,986 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:43,996 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #138
2015-02-21 19:01:43,996 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#138 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:43,996 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:43,996 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 0 procesingTime= 0
2015-02-21 19:01:43,997 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#138 Retry#0
2015-02-21 19:01:43,997 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#138 Retry#0 Wrote 33 bytes.
2015-02-21 19:01:43,997 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #139
2015-02-21 19:01:43,997 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#139 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:43,998 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:43,998 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 1 procesingTime= 0
2015-02-21 19:01:43,998 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#139 Retry#0
2015-02-21 19:01:43,998 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#139 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:43,999 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #140
2015-02-21 19:01:43,999 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#140 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:43,999 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:44,000 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 1 procesingTime= 0
2015-02-21 19:01:44,000 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#140 Retry#0
2015-02-21 19:01:44,000 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#140 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:44,234 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #5
2015-02-21 19:01:44,235 DEBUG [IPC Server handler 0 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 35954: ping(attempt_1424550134651_0002_m_000002_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34286 Call#5 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:44,235 DEBUG [IPC Server handler 0 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:44,235 DEBUG [IPC Server handler 0 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from attempt_1424550134651_0002_m_000002_0
2015-02-21 19:01:44,235 DEBUG [IPC Server handler 0 on 35954] org.apache.hadoop.ipc.Server: Served: ping queueTime= 0 procesingTime= 0
2015-02-21 19:01:44,235 DEBUG [IPC Server handler 0 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 35954: responding to ping(attempt_1424550134651_0002_m_000002_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34286 Call#5 Retry#0
2015-02-21 19:01:44,235 DEBUG [IPC Server handler 0 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 35954: responding to ping(attempt_1424550134651_0002_m_000002_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34286 Call#5 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:44,497 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: REQUEST /ws/v1/mapreduce/jobs/job_1424550134651_0002 on org.mortbay.jetty.HttpConnection@23f3c60c
2015-02-21 19:01:44,498 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: sessionManager=org.mortbay.jetty.servlet.HashSessionManager@4befbfaf
2015-02-21 19:01:44,498 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: session=null
2015-02-21 19:01:44,498 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: servlet=default
2015-02-21 19:01:44,498 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: chain=NoCacheFilter->safety->AM_PROXY_FILTER->guice->default
2015-02-21 19:01:44,498 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: servlet holder=default
2015-02-21 19:01:44,498 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: call filter NoCacheFilter
2015-02-21 19:01:44,498 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: call filter safety
2015-02-21 19:01:44,498 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: call filter AM_PROXY_FILTER
2015-02-21 19:01:44,498 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter: Remote address for request is: 192.168.2.253
2015-02-21 19:01:44,498 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter: proxy address is: 192.168.2.253
2015-02-21 19:01:44,498 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: call filter guice
2015-02-21 19:01:44,501 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: RESPONSE /ws/v1/mapreduce/jobs/job_1424550134651_0002 200
2015-02-21 19:01:44,502 DEBUG [970736822@qtp-700266387-0] org.mortbay.log: EOF
2015-02-21 19:01:44,658 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #60
2015-02-21 19:01:44,659 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #60
2015-02-21 19:01:44,659 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 1ms
2015-02-21 19:01:44,752 DEBUG [IPC Server listener on 35954] org.apache.hadoop.ipc.Server: Server connection from 192.168.2.251:58639; # active connections: 3; # queued calls: 0
2015-02-21 19:01:44,791 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #-33
2015-02-21 19:01:44,791 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.security.SaslRpcServer: Created SASL server with mechanism = DIGEST-MD5
2015-02-21 19:01:44,792 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Sending sasl message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"lshubnxxJ5eotceGb/RUk5Cpm3kw8wHOc5jx4O1/\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}
auths {
  method: "SIMPLE"
  mechanism: ""
}

2015-02-21 19:01:44,792 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.251:58639 Call#-33 Retry#-1
2015-02-21 19:01:44,792 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.251:58639 Call#-33 Retry#-1 Wrote 178 bytes.
2015-02-21 19:01:44,934 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #7
2015-02-21 19:01:44,935 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: statusUpdate(attempt_1424550134651_0002_m_000002_0, org.apache.hadoop.mapred.MapTaskStatus@5ef4c3a), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34286 Call#7 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:44,936 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:44,936 INFO [IPC Server handler 6 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1424550134651_0002_m_000002_0 is : 1.0
2015-02-21 19:01:44,937 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime= 1 procesingTime= 1
2015-02-21 19:01:44,937 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: responding to statusUpdate(attempt_1424550134651_0002_m_000002_0, org.apache.hadoop.mapred.MapTaskStatus@5ef4c3a), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34286 Call#7 Retry#0
2015-02-21 19:01:44,937 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: responding to statusUpdate(attempt_1424550134651_0002_m_000002_0, org.apache.hadoop.mapred.MapTaskStatus@5ef4c3a), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34286 Call#7 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:44,937 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.EventType: TA_UPDATE
2015-02-21 19:01:44,938 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000002_0 of type TA_UPDATE
2015-02-21 19:01:44,938 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:44,939 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #8
2015-02-21 19:01:44,939 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: done(attempt_1424550134651_0002_m_000002_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34286 Call#8 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:44,939 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:44,939 INFO [IPC Server handler 8 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1424550134651_0002_m_000002_0
2015-02-21 19:01:44,939 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.ipc.Server: Served: done queueTime= 0 procesingTime= 0
2015-02-21 19:01:44,939 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_DONE
2015-02-21 19:01:44,940 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: responding to done(attempt_1424550134651_0002_m_000002_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34286 Call#8 Retry#0
2015-02-21 19:01:44,940 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000002_0 of type TA_DONE
2015-02-21 19:01:44,940 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: responding to done(attempt_1424550134651_0002_m_000002_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34286 Call#8 Retry#0 Wrote 118 bytes.
2015-02-21 19:01:44,940 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000002_0 TaskAttempt Transitioned from RUNNING to SUCCESS_CONTAINER_CLEANUP
2015-02-21 19:01:44,940 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherEvent.EventType: CONTAINER_REMOTE_CLEANUP for container container_1424550134651_0002_01_000008 taskAttempt attempt_1424550134651_0002_m_000002_0
2015-02-21 19:01:44,940 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_1424550134651_0002_01_000008 taskAttempt attempt_1424550134651_0002_m_000002_0
2015-02-21 19:01:44,940 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1424550134651_0002_m_000002_0
2015-02-21 19:01:44,940 INFO [ContainerLauncher #1] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : hadoop1.rdpratti.com:8041
2015-02-21 19:01:44,941 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, Service: 192.168.2.250:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@4a555fcc)
2015-02-21 19:01:44,941 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:88)
2015-02-21 19:01:44,941 DEBUG [ContainerLauncher #1] org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
2015-02-21 19:01:44,941 DEBUG [ContainerLauncher #1] org.apache.hadoop.ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@27c8bfa4
2015-02-21 19:01:44,941 DEBUG [ContainerLauncher #1] org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.
2015-02-21 19:01:44,941 DEBUG [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Connecting to hadoop1.rdpratti.com/192.168.2.250:8041
2015-02-21 19:01:44,942 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
2015-02-21 19:01:44,942 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: disconnecting client 192.168.2.250:34286. Number of active connections: 2
2015-02-21 19:01:44,942 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 19:01:44,944 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"K36Qn9PBMokH5SaFD7XSzbi206rNeP+JLUS7pnge\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}

2015-02-21 19:01:44,944 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@3da736e7
2015-02-21 19:01:44,944 INFO [ContainerLauncher #1] org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: 192.168.2.250:8041. Current token is Kind: NMToken, Service: 192.168.2.250:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@2d059750)
2015-02-21 19:01:44,944 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2015-02-21 19:01:44,944 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2015-02-21 19:01:44,944 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting username: AAABS63OA3sAAAACAAAAAQAZaGFkb29wMS5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo
2015-02-21 19:01:44,944 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting userPassword
2015-02-21 19:01:44,944 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting realm: default
2015-02-21 19:01:44,945 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
token: "charset=utf-8,username=\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMS5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=\"default\",nonce=\"K36Qn9PBMokH5SaFD7XSzbi206rNeP+JLUS7pnge\",nc=00000001,cnonce=\"M4ITbIXWqSWvIraHhzkj6/CXWFa5SKU4f11d7aQQ\",digest-uri=\"/default\",maxbuf=65536,response=07bc842dfe6da911e8c8b689833bae47,qop=auth"
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
}

2015-02-21 19:01:44,949 DEBUG [ContainerLauncher #1] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: SUCCESS
token: "rspauth=786fa995cedb19dc0e809c2a408e8ea2"

2015-02-21 19:01:44,949 DEBUG [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Negotiated QOP is :auth
2015-02-21 19:01:44,956 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001 sending #61
2015-02-21 19:01:44,956 DEBUG [IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001: starting, having connections 2
2015-02-21 19:01:44,962 DEBUG [IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001 got value #61
2015-02-21 19:01:44,962 DEBUG [ContainerLauncher #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: stopContainers took 21ms
2015-02-21 19:01:44,962 DEBUG [IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001: closed
2015-02-21 19:01:44,962 DEBUG [IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001: stopped, remaining connections 1
2015-02-21 19:01:44,962 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_CONTAINER_CLEANED
2015-02-21 19:01:44,962 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_m_000002_0 of type TA_CONTAINER_CLEANED
2015-02-21 19:01:44,962 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_m_000002_0 TaskAttempt Transitioned from SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
2015-02-21 19:01:44,962 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:44,963 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:44,963 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: MAP_ATTEMPT_FINISHED
2015-02-21 19:01:44,963 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType: T_ATTEMPT_SUCCEEDED
2015-02-21 19:01:44,963 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:44,963 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_m_000002 of type T_ATTEMPT_SUCCEEDED
2015-02-21 19:01:44,963 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1424550134651_0002_m_000002_0
2015-02-21 19:01:44,963 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1424550134651_0002_m_000002 Task Transitioned from RUNNING to SUCCEEDED
2015-02-21 19:01:44,963 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:44,963 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskAttemptCompletedEvent.EventType: JOB_TASK_ATTEMPT_COMPLETED
2015-02-21 19:01:44,963 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_TASK_ATTEMPT_COMPLETED
2015-02-21 19:01:44,963 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskEvent.EventType: JOB_TASK_COMPLETED
2015-02-21 19:01:44,963 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_TASK_COMPLETED
2015-02-21 19:01:44,963 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 5
2015-02-21 19:01:44,963 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: TASK_FINISHED
2015-02-21 19:01:44,965 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:44,965 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock 40765 lastFlushOffset 36716
2015-02-21 19:01:44,965 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 8
2015-02-21 19:01:44,965 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 8
2015-02-21 19:01:44,966 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:8 offsetInBlock:36352 lastPacketInBlock:false lastByteOffsetInBlock: 40765
2015-02-21 19:01:44,968 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 8 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1834981
2015-02-21 19:01:44,968 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler MAP_ATTEMPT_FINISHED
2015-02-21 19:01:44,968 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:44,970 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=9, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=40448
2015-02-21 19:01:44,970 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:44,970 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock 43504 lastFlushOffset 40765
2015-02-21 19:01:44,970 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 9
2015-02-21 19:01:44,970 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 9
2015-02-21 19:01:44,971 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:9 offsetInBlock:40448 lastPacketInBlock:false lastByteOffsetInBlock: 43504
2015-02-21 19:01:44,973 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 9 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1824112
2015-02-21 19:01:44,973 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler TASK_FINISHED
2015-02-21 19:01:44,974 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #-33
2015-02-21 19:01:44,974 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Have read input token of size 270 for processing by saslServer.evaluateResponse()
2015-02-21 19:01:44,975 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting password for client: job_1424550134651_0002 (auth:SIMPLE)
2015-02-21 19:01:44,975 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting canonicalized client ID: job_1424550134651_0002
2015-02-21 19:01:44,976 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Will send SUCCESS token of size 40 from saslServer.
2015-02-21 19:01:44,976 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: SASL server context established. Negotiated QoP is auth
2015-02-21 19:01:44,976 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: SASL server successfully authenticated client: job_1424550134651_0002 (auth:SIMPLE)
2015-02-21 19:01:44,976 INFO [Socket Reader #1 for port 35954] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1424550134651_0002 (auth:SIMPLE)
2015-02-21 19:01:44,976 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Sending sasl message state: SUCCESS
token: "rspauth=f727c401ba56eb1b72222f182b66dfd1"

2015-02-21 19:01:44,976 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.251:58639 Call#-33 Retry#-1
2015-02-21 19:01:44,976 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.251:58639 Call#-33 Retry#-1 Wrote 64 bytes.
2015-02-21 19:01:45,002 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #141
2015-02-21 19:01:45,002 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#141 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:45,002 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:45,002 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 0 procesingTime= 0
2015-02-21 19:01:45,003 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#141 Retry#0
2015-02-21 19:01:45,003 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#141 Retry#0 Wrote 102 bytes.
2015-02-21 19:01:45,004 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #142
2015-02-21 19:01:45,004 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#142 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:45,004 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:45,004 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:45,004 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#142 Retry#0
2015-02-21 19:01:45,004 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#142 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:45,005 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #-3
2015-02-21 19:01:45,005 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #143
2015-02-21 19:01:45,005 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Successfully authorized userInfo {
}
protocol: "org.apache.hadoop.mapred.TaskUmbilicalProtocol"

2015-02-21 19:01:45,005 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#143 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:45,006 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:45,006 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #0
2015-02-21 19:01:45,006 DEBUG [IPC Server handler 11 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 11 on 35954: getTask(org.apache.hadoop.mapred.JvmContext@44749238), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#0 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:45,006 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 1 procesingTime= 0
2015-02-21 19:01:45,006 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#143 Retry#0
2015-02-21 19:01:45,006 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#143 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:45,007 DEBUG [IPC Server handler 11 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:45,007 INFO [IPC Server handler 11 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1424550134651_0002_r_000009 asked for a task
2015-02-21 19:01:45,007 INFO [IPC Server handler 11 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1424550134651_0002_r_000009 given task: attempt_1424550134651_0002_r_000000_0
2015-02-21 19:01:45,007 DEBUG [IPC Server handler 11 on 35954] org.apache.hadoop.ipc.Server: Served: getTask queueTime= 1 procesingTime= 0
2015-02-21 19:01:45,008 DEBUG [IPC Server handler 11 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 11 on 35954: responding to getTask(org.apache.hadoop.mapred.JvmContext@44749238), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#0 Retry#0
2015-02-21 19:01:45,008 DEBUG [IPC Server handler 11 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 11 on 35954: responding to getTask(org.apache.hadoop.mapred.JvmContext@44749238), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#0 Retry#0 Wrote 450 bytes.
2015-02-21 19:01:45,105 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #1
2015-02-21 19:01:45,105 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: getMapCompletionEvents(job_1424550134651_0002, 0, 10000, attempt_1424550134651_0002_r_000001_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43604 Call#1 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:45,105 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:45,105 INFO [IPC Server handler 4 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: MapCompletionEvents request from attempt_1424550134651_0002_r_000001_0. startIndex 0 maxEvents 10000
2015-02-21 19:01:45,106 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: Served: getMapCompletionEvents queueTime= 0 procesingTime= 1
2015-02-21 19:01:45,106 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: responding to getMapCompletionEvents(job_1424550134651_0002, 0, 10000, attempt_1424550134651_0002_r_000001_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43604 Call#1 Retry#0
2015-02-21 19:01:45,106 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: responding to getMapCompletionEvents(job_1424550134651_0002, 0, 10000, attempt_1424550134651_0002_r_000001_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43604 Call#1 Retry#0 Wrote 553 bytes.
2015-02-21 19:01:45,660 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:2 AssignedMaps:1 AssignedReds:2 CompletedMaps:5 CompletedReds:0 ContAlloc:7 ContRel:0 HostLocal:2 RackLocal:3
2015-02-21 19:01:45,660 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #62
2015-02-21 19:01:45,662 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #62
2015-02-21 19:01:45,662 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms
2015-02-21 19:01:45,823 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #2
2015-02-21 19:01:45,824 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: statusUpdate(attempt_1424550134651_0002_r_000001_0, org.apache.hadoop.mapred.ReduceTaskStatus@73a8392e), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43604 Call#2 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:45,824 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:45,824 INFO
[IPC Server handler 6 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of = TaskAttempt attempt_1424550134651_0002_r_000001_0 is : 0.0=0A= 2015-02-21 19:01:45,825 DEBUG [IPC Server handler 6 on 35954] = org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime=3D 0 = procesingTime=3D 1=0A= 2015-02-21 19:01:45,825 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent= .EventType: TA_UPDATE=0A= 2015-02-21 19:01:45,825 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_r_000001_0 of type TA_UPDATE=0A= 2015-02-21 19:01:45,825 DEBUG [IPC Server handler 6 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: responding = to statusUpdate(attempt_1424550134651_0002_r_000001_0, = org.apache.hadoop.mapred.ReduceTaskStatus@73a8392e), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43604 Call#2 Retry#0=0A= 2015-02-21 19:01:45,825 DEBUG [IPC Server handler 6 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: responding = to statusUpdate(attempt_1424550134651_0002_r_000001_0, = org.apache.hadoop.mapred.ReduceTaskStatus@73a8392e), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43604 Call#2 Retry#0 Wrote 41 bytes.=0A= 2015-02-21 19:01:45,825 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: = ATTEMPT_STATUS_UPDATE=0A= 2015-02-21 19:01:46,008 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #144=0A= 2015-02-21 19:01:46,008 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#144 Retry#0 for RpcKind = RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:46,008 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:46,009 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents = queueTime=3D 0 procesingTime=3D 1=0A= 2015-02-21 19:01:46,009 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#144 Retry#0=0A= 2015-02-21 19:01:46,009 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#144 Retry#0 Wrote 33 bytes.=0A= 2015-02-21 19:01:46,009 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #145=0A= 2015-02-21 19:01:46,009 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#145 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:46,010 DEBUG [IPC Server handler 0 on 59910] = 
org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:46,010 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 1 = procesingTime=3D 0=0A= 2015-02-21 19:01:46,010 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#145 Retry#0=0A= 2015-02-21 19:01:46,010 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#145 Retry#0 Wrote 281 bytes.=0A= 2015-02-21 19:01:46,011 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #146=0A= 2015-02-21 19:01:46,011 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#146 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:46,011 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:46,012 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 1=0A= 2015-02-21 19:01:46,012 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#146 Retry#0=0A= 2015-02-21 19:01:46,012 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#146 Retry#0 Wrote 281 bytes.=0A= 2015-02-21 19:01:46,185 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #3=0A= 2015-02-21 19:01:46,185 DEBUG [IPC Server handler 28 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 28 on 35954: = statusUpdate(attempt_1424550134651_0002_r_000001_0, = org.apache.hadoop.mapred.ReduceTaskStatus@1e9cc33d), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43604 Call#3 Retry#0 for RpcKind RPC_WRITABLE=0A= 2015-02-21 19:01:46,185 DEBUG [IPC Server handler 28 on 35954] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:job_1424550134651_0002 (auth:TOKEN) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:46,186 INFO [IPC Server handler 28 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of = TaskAttempt attempt_1424550134651_0002_r_000001_0 is : 0.0=0A= 2015-02-21 19:01:46,186 DEBUG [IPC Server handler 28 on 35954] = org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime=3D 0 = procesingTime=3D 1=0A= 2015-02-21 19:01:46,186 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent= .EventType: TA_UPDATE=0A= 2015-02-21 19:01:46,187 DEBUG [IPC Server handler 28 on 35954] = org.apache.hadoop.ipc.Server: IPC 
Server handler 28 on 35954: responding = to statusUpdate(attempt_1424550134651_0002_r_000001_0, = org.apache.hadoop.mapred.ReduceTaskStatus@1e9cc33d), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43604 Call#3 Retry#0=0A= 2015-02-21 19:01:46,187 DEBUG [IPC Server handler 28 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 28 on 35954: responding = to statusUpdate(attempt_1424550134651_0002_r_000001_0, = org.apache.hadoop.mapred.ReduceTaskStatus@1e9cc33d), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43604 Call#3 Retry#0 Wrote 41 bytes.=0A= 2015-02-21 19:01:46,187 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_r_000001_0 of type TA_UPDATE=0A= 2015-02-21 19:01:46,187 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: = ATTEMPT_STATUS_UPDATE=0A= 2015-02-21 19:01:46,662 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #63=0A= 2015-02-21 19:01:46,665 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #63=0A= 2015-02-21 19:01:46,665 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 3ms=0A= 2015-02-21 19:01:46,665 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received new = Container :Container: [ContainerId: = container_1424550134651_0002_01_000011, NodeId: = hadoop1.rdpratti.com:8041, NodeHttpAddress: hadoop1.rdpratti.com:8042, = Resource: , Priority: 10, Token: Token { kind: = ContainerToken, service: 192.168.2.250:8041 }, ]=0A= 2015-02-21 19:01:46,665 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received = completed container container_1424550134651_0002_01_000008=0A= 2015-02-21 19:01:46,665 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got = allocated containers 1=0A= 2015-02-21 19:01:46,665 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: = TA_CONTAINER_COMPLETED=0A= 2015-02-21 19:01:46,666 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning = container container_1424550134651_0002_01_000011 with priority 10 to NM = hadoop1.rdpratti.com:8041=0A= 2015-02-21 19:01:46,666 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_m_000002_0 of type TA_CONTAINER_COMPLETED=0A= 2015-02-21 19:01:46,666 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning = container Container: [ContainerId: = container_1424550134651_0002_01_000011, NodeId: = hadoop1.rdpratti.com:8041, NodeHttpAddress: hadoop1.rdpratti.com:8042, = Resource: , Priority: 10, Token: Token { kind: = ContainerToken, service: 192.168.2.250:8041 }, ] to reduce=0A= 2015-02-21 19:01:46,666 DEBUG 
[AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptDiagnosticsUpdate= Event.EventType: TA_DIAGNOSTICS_UPDATE=0A= 2015-02-21 19:01:46,666 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned to = reduce=0A= 2015-02-21 19:01:46,666 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE = decResourceRequest: applicationId=3D2 priority=3D10 resourceName=3D* = numContainers=3D2 #asks=3D0=0A= 2015-02-21 19:01:46,666 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_m_000002_0 of type TA_DIAGNOSTICS_UPDATE=0A= 2015-02-21 19:01:46,666 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER = decResourceRequest: applicationId=3D2 priority=3D10 resourceName=3D* = numContainers=3D1 #asks=3D1=0A= 2015-02-21 19:01:46,666 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics = report from attempt_1424550134651_0002_m_000002_0: Container killed by = the ApplicationMaster.=0A= Container killed on request. Exit code is 143=0A= Container exited with a non-zero exit code 143=0A= =0A= 2015-02-21 19:01:46,666 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerAssigned= Event.EventType: TA_ASSIGNED=0A= 2015-02-21 19:01:46,666 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned = container container_1424550134651_0002_01_000011 to = attempt_1424550134651_0002_r_000002_0=0A= 2015-02-21 19:01:46,666 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_r_000002_0 of type TA_ASSIGNED=0A= 2015-02-21 19:01:46,666 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned = container (Container: [ContainerId: = container_1424550134651_0002_01_000011, NodeId: = hadoop1.rdpratti.com:8041, NodeHttpAddress: hadoop1.rdpratti.com:8042, = Resource: , Priority: 10, Token: Token { kind: = ContainerToken, service: 192.168.2.250:8041 }, ]) to task = attempt_1424550134651_0002_r_000002_0 on node hadoop1.rdpratti.com:8041=0A= 2015-02-21 19:01:46,666 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapred.SortedRanges: currentIndex 0 0:0=0A= 2015-02-21 19:01:46,666 INFO [RMCommunicator Allocator] = org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After = Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0 = AssignedReds:3 CompletedMaps:5 CompletedReds:0 ContAlloc:8 ContRel:0 = HostLocal:2 RackLocal:3=0A= 2015-02-21 19:01:46,666 INFO [AsyncDispatcher event handler] = org.apache.hadoop.yarn.util.RackResolver: Resolved hadoop1.rdpratti.com = to /default=0A= 2015-02-21 19:01:46,667 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: = attempt_1424550134651_0002_r_000002_0 TaskAttempt Transitioned from = UNASSIGNED to ASSIGNED=0A= 2015-02-21 19:01:46,667 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerRemoteLaunchEvent.Ev= entType: 
CONTAINER_REMOTE_LAUNCH for container = container_1424550134651_0002_01_000011 taskAttempt = attempt_1424550134651_0002_r_000002_0=0A= 2015-02-21 19:01:46,667 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: = TASK_CONTAINER_NEED_UPDATE=0A= 2015-02-21 19:01:46,667 INFO [ContainerLauncher #2] = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: = Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container = container_1424550134651_0002_01_000011 taskAttempt = attempt_1424550134651_0002_r_000002_0=0A= 2015-02-21 19:01:46,667 INFO [ContainerLauncher #2] = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: = Launching attempt_1424550134651_0002_r_000002_0=0A= 2015-02-21 19:01:46,667 INFO [ContainerLauncher #2] = org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: = Opening proxy : hadoop1.rdpratti.com:8041=0A= 2015-02-21 19:01:46,668 DEBUG [ContainerLauncher #2] = org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, = Service: 192.168.2.250:8041, Ident: = (org.apache.hadoop.yarn.security.NMTokenIdentifier@40fb8e65)=0A= 2015-02-21 19:01:46,668 DEBUG [ContainerLauncher #2] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) = from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(Serve= rProxy.java:88)=0A= 2015-02-21 19:01:46,668 DEBUG [ContainerLauncher #2] = org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a = HadoopYarnProtoRpc proxy for protocol interface = org.apache.hadoop.yarn.api.ContainerManagementProtocol=0A= 2015-02-21 19:01:46,668 DEBUG [ContainerLauncher #2] = org.apache.hadoop.ipc.Client: getting client out of cache: = org.apache.hadoop.ipc.Client@27c8bfa4=0A= 2015-02-21 19:01:46,669 DEBUG [ContainerLauncher #2] = org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.=0A= 2015-02-21 19:01:46,669 DEBUG [ContainerLauncher #2] = org.apache.hadoop.ipc.Client: Connecting to = hadoop1.rdpratti.com/192.168.2.250:8041=0A= 2015-02-21 19:01:46,670 DEBUG [ContainerLauncher #2] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) = from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:7= 12)=0A= 2015-02-21 19:01:46,670 DEBUG [ContainerLauncher #2] = org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: = NEGOTIATE=0A= =0A= 2015-02-21 19:01:46,671 DEBUG [ContainerLauncher #2] = org.apache.hadoop.security.SaslRpcClient: Received SASL message state: = NEGOTIATE=0A= auths {=0A= method: "TOKEN"=0A= mechanism: "DIGEST-MD5"=0A= protocol: ""=0A= serverId: "default"=0A= challenge: = "realm=3D\"default\",nonce=3D\"GPnyzk5o+kwN8QTXdmv7yi8id1b//vhoKoXHxlbl\"= ,qop=3D\"auth\",charset=3Dutf-8,algorithm=3Dmd5-sess"=0A= }=0A= =0A= 2015-02-21 19:01:46,671 DEBUG [ContainerLauncher #2] = org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface = org.apache.hadoop.yarn.api.ContainerManagementProtocolPB = info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@4c0f2= 001=0A= 2015-02-21 19:01:46,672 INFO [ContainerLauncher #2] = org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: = 192.168.2.250:8041. 
Current token is Kind: NMToken, Service: = 192.168.2.250:8041, Ident: = (org.apache.hadoop.yarn.security.NMTokenIdentifier@74fe48a9)=0A= 2015-02-21 19:01:46,672 DEBUG [ContainerLauncher #2] = org.apache.hadoop.security.SaslRpcClient: Creating SASL = DIGEST-MD5(TOKEN) client to authenticate to service at default=0A= 2015-02-21 19:01:46,672 DEBUG [ContainerLauncher #2] = org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for = protocol ContainerManagementProtocolPB=0A= 2015-02-21 19:01:46,672 DEBUG [ContainerLauncher #2] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = username: = AAABS63OA3sAAAACAAAAAQAZaGFkb29wMS5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB0= 0Yo=0A= 2015-02-21 19:01:46,672 DEBUG [ContainerLauncher #2] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = userPassword=0A= 2015-02-21 19:01:46,672 DEBUG [ContainerLauncher #2] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = realm: default=0A= 2015-02-21 19:01:46,673 DEBUG [ContainerLauncher #2] = org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: = INITIATE=0A= token: = "charset=3Dutf-8,username=3D\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMS5yZHByYXR= 0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=3D\"default\",nonce=3D\"GPnyzk5= o+kwN8QTXdmv7yi8id1b//vhoKoXHxlbl\",nc=3D00000001,cnonce=3D\"tmNk10XeOxNL= GlJQjwMljmv6Tdb5mqA/RNQur/Pk\",digest-uri=3D\"/default\",maxbuf=3D65536,r= esponse=3Dc32d25ed1fe09a177c0ca7b6c08a0f67,qop=3Dauth"=0A= auths {=0A= method: "TOKEN"=0A= mechanism: "DIGEST-MD5"=0A= protocol: ""=0A= serverId: "default"=0A= }=0A= =0A= 2015-02-21 19:01:46,676 DEBUG [ContainerLauncher #2] = org.apache.hadoop.security.SaslRpcClient: Received SASL message state: = SUCCESS=0A= token: "rspauth=3Dc37fd8c2f3f5d10a1de9cc587ad833d7"=0A= =0A= 2015-02-21 19:01:46,676 DEBUG [ContainerLauncher #2] = org.apache.hadoop.ipc.Client: Negotiated QOP is :auth=0A= 2015-02-21 19:01:46,677 DEBUG [IPC Client (1541092382) connection to = hadoop1.rdpratti.com/192.168.2.250:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = hadoop1.rdpratti.com/192.168.2.250:8041 from = appattempt_1424550134651_0002_000001: starting, having connections 2=0A= 2015-02-21 19:01:46,678 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop1.rdpratti.com/192.168.2.250:8041 from = appattempt_1424550134651_0002_000001 sending #64=0A= 2015-02-21 19:01:46,695 DEBUG [IPC Client (1541092382) connection to = hadoop1.rdpratti.com/192.168.2.250:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = hadoop1.rdpratti.com/192.168.2.250:8041 from = appattempt_1424550134651_0002_000001 got value #64=0A= 2015-02-21 19:01:46,695 DEBUG [ContainerLauncher #2] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: startContainers took 26ms=0A= 2015-02-21 19:01:46,696 DEBUG [IPC Client (1541092382) connection to = hadoop1.rdpratti.com/192.168.2.250:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = hadoop1.rdpratti.com/192.168.2.250:8041 from = appattempt_1424550134651_0002_000001: closed=0A= 2015-02-21 19:01:46,696 DEBUG [IPC Client (1541092382) connection to = hadoop1.rdpratti.com/192.168.2.250:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = 
hadoop1.rdpratti.com/192.168.2.250:8041 from = appattempt_1424550134651_0002_000001: stopped, remaining connections 1=0A= 2015-02-21 19:01:46,696 INFO [ContainerLauncher #2] = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: = Shuffle port returned by ContainerManager for = attempt_1424550134651_0002_r_000002_0 : 13562=0A= 2015-02-21 19:01:46,696 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerLaunched= Event.EventType: TA_CONTAINER_LAUNCHED=0A= 2015-02-21 19:01:46,696 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_r_000002_0 of type TA_CONTAINER_LAUNCHED=0A= 2015-02-21 19:01:46,696 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: = TaskAttempt: [attempt_1424550134651_0002_r_000002_0] using containerId: = [container_1424550134651_0002_01_000011 on NM: = [hadoop1.rdpratti.com:8041]=0A= 2015-02-21 19:01:46,696 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: = attempt_1424550134651_0002_r_000002_0 TaskAttempt Transitioned from = ASSIGNED to RUNNING=0A= 2015-02-21 19:01:46,696 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventT= ype: JOB_COUNTER_UPDATE=0A= 2015-02-21 19:01:46,696 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing = job_1424550134651_0002 of type JOB_COUNTER_UPDATE=0A= 2015-02-21 19:01:46,696 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: = REDUCE_ATTEMPT_STARTED=0A= 2015-02-21 19:01:46,697 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: = ATTEMPT_START=0A= 2015-02-21 19:01:46,697 DEBUG [eventHandlingThread] = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing = event=0A= 2015-02-21 19:01:46,697 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType:= T_ATTEMPT_LAUNCHED=0A= 2015-02-21 19:01:46,697 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing = task_1424550134651_0002_r_000002 of type T_ATTEMPT_LAUNCHED=0A= 2015-02-21 19:01:46,697 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: = task_1424550134651_0002_r_000002 Task Transitioned from SCHEDULED to = RUNNING=0A= 2015-02-21 19:01:46,697 DEBUG [eventHandlingThread] = org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new = packet seqno=3D10, = src=3D/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_00= 02_1.jhist, packetSize=3D65532, chunksPerPacket=3D127, = bytesCurBlock=3D43008=0A= 2015-02-21 19:01:46,697 DEBUG [eventHandlingThread] = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In = HistoryEventHandler REDUCE_ATTEMPT_STARTED=0A= 2015-02-21 19:01:46,989 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #9=0A= 2015-02-21 19:01:46,989 
DEBUG [IPC Server handler 23 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 23 on 35954: = commitPending(attempt_1424550134651_0002_r_000001_0, = org.apache.hadoop.mapred.ReduceTaskStatus@7216670f), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43604 Call#9 Retry#0 for RpcKind RPC_WRITABLE=0A= 2015-02-21 19:01:46,989 DEBUG [IPC Server handler 23 on 35954] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:job_1424550134651_0002 (auth:TOKEN) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:46,989 INFO [IPC Server handler 23 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: Commit-pending state = update from attempt_1424550134651_0002_r_000001_0=0A= 2015-02-21 19:01:46,989 DEBUG [IPC Server handler 23 on 35954] = org.apache.hadoop.ipc.Server: Served: commitPending queueTime=3D 0 = procesingTime=3D 0=0A= 2015-02-21 19:01:46,989 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: = TA_COMMIT_PENDING=0A= 2015-02-21 19:01:46,990 DEBUG [IPC Server handler 23 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 23 on 35954: responding = to commitPending(attempt_1424550134651_0002_r_000001_0, = org.apache.hadoop.mapred.ReduceTaskStatus@7216670f), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43604 Call#9 Retry#0=0A= 2015-02-21 19:01:46,990 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_r_000001_0 of type TA_COMMIT_PENDING=0A= 2015-02-21 19:01:46,990 DEBUG [IPC Server handler 23 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 23 on 35954: responding = to commitPending(attempt_1424550134651_0002_r_000001_0, = org.apache.hadoop.mapred.ReduceTaskStatus@7216670f), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43604 Call#9 Retry#0 Wrote 118 bytes.=0A= 2015-02-21 19:01:46,990 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: = attempt_1424550134651_0002_r_000001_0 TaskAttempt Transitioned from = RUNNING to COMMIT_PENDING=0A= 2015-02-21 19:01:46,990 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType:= T_ATTEMPT_COMMIT_PENDING=0A= 2015-02-21 19:01:46,990 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing = task_1424550134651_0002_r_000001 of type T_ATTEMPT_COMMIT_PENDING=0A= 2015-02-21 19:01:46,990 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: = attempt_1424550134651_0002_r_000001_0 given a go for committing the task = output.=0A= 2015-02-21 19:01:46,996 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #10=0A= 2015-02-21 19:01:46,997 DEBUG [IPC Server handler 11 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 11 on 35954: = canCommit(attempt_1424550134651_0002_r_000001_0), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43604 Call#10 Retry#0 for RpcKind RPC_WRITABLE=0A= 2015-02-21 19:01:46,997 DEBUG [IPC Server handler 11 on 35954] = org.apache.hadoop.security.UserGroupInformation: 
PrivilegedAction = as:job_1424550134651_0002 (auth:TOKEN) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:46,997 INFO [IPC Server handler 11 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: Commit go/no-go = request from attempt_1424550134651_0002_r_000001_0=0A= 2015-02-21 19:01:46,997 INFO [IPC Server handler 11 on 35954] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Result of = canCommit for attempt_1424550134651_0002_r_000001_0:true=0A= 2015-02-21 19:01:46,997 DEBUG [IPC Server handler 11 on 35954] = org.apache.hadoop.ipc.Server: Served: canCommit queueTime=3D 0 = procesingTime=3D 0=0A= 2015-02-21 19:01:46,997 DEBUG [IPC Server handler 11 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 11 on 35954: responding = to canCommit(attempt_1424550134651_0002_r_000001_0), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43604 Call#10 Retry#0=0A= 2015-02-21 19:01:46,997 DEBUG [IPC Server handler 11 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 11 on 35954: responding = to canCommit(attempt_1424550134651_0002_r_000001_0), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43604 Call#10 Retry#0 Wrote 41 bytes.=0A= 2015-02-21 19:01:47,013 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #147=0A= 2015-02-21 19:01:47,013 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#147 Retry#0 for RpcKind = RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:47,013 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:47,014 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents = queueTime=3D 1 procesingTime=3D 0=0A= 2015-02-21 19:01:47,014 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#147 Retry#0=0A= 2015-02-21 19:01:47,014 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompl= etionEvents from 192.168.2.253:57473 Call#147 Retry#0 Wrote 33 bytes.=0A= 2015-02-21 19:01:47,015 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #148=0A= 2015-02-21 19:01:47,015 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#148 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:47,015 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:47,015 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 0 = procesingTime=3D 0=0A= 2015-02-21 19:01:47,015 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 
59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#148 Retry#0=0A= 2015-02-21 19:01:47,015 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#148 Retry#0 Wrote 281 bytes.=0A= 2015-02-21 19:01:47,016 DEBUG [Socket Reader #1 for port 59910] = org.apache.hadoop.ipc.Server: got #149=0A= 2015-02-21 19:01:47,016 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: = org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from = 192.168.2.253:57473 Call#149 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER=0A= 2015-02-21 19:01:47,016 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:cloudera (auth:SIMPLE) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:47,017 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: Served: getJobReport queueTime=3D 1 = procesingTime=3D 0=0A= 2015-02-21 19:01:47,017 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#149 Retry#0=0A= 2015-02-21 19:01:47,017 DEBUG [IPC Server handler 0 on 59910] = org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding = to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport = from 192.168.2.253:57473 Call#149 Retry#0 Wrote 281 bytes.=0A= 2015-02-21 19:01:47,072 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #14=0A= 2015-02-21 19:01:47,072 DEBUG [IPC Server handler 4 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: = statusUpdate(attempt_1424550134651_0002_r_000001_0, = org.apache.hadoop.mapred.ReduceTaskStatus@13983fdd), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43604 Call#14 Retry#0 for RpcKind RPC_WRITABLE=0A= 2015-02-21 19:01:47,073 DEBUG [IPC Server handler 4 on 35954] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:job_1424550134651_0002 (auth:TOKEN) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:47,073 INFO [IPC Server handler 4 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of = TaskAttempt attempt_1424550134651_0002_r_000001_0 is : 0.7395436=0A= 2015-02-21 19:01:47,075 DEBUG [IPC Server handler 4 on 35954] = org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime=3D 1 = procesingTime=3D 2=0A= 2015-02-21 19:01:47,075 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent= .EventType: TA_UPDATE=0A= 2015-02-21 19:01:47,075 DEBUG [IPC Server handler 4 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: responding = to statusUpdate(attempt_1424550134651_0002_r_000001_0, = org.apache.hadoop.mapred.ReduceTaskStatus@13983fdd), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43604 Call#14 Retry#0=0A= 2015-02-21 19:01:47,075 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = 
attempt_1424550134651_0002_r_000001_0 of type TA_UPDATE=0A= 2015-02-21 19:01:47,075 DEBUG [IPC Server handler 4 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: responding = to statusUpdate(attempt_1424550134651_0002_r_000001_0, = org.apache.hadoop.mapred.ReduceTaskStatus@13983fdd), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.252:43604 Call#14 Retry#0 Wrote 41 bytes.=0A= 2015-02-21 19:01:47,075 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: = ATTEMPT_STATUS_UPDATE=0A= 2015-02-21 19:01:47,076 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #15=0A= 2015-02-21 19:01:47,076 DEBUG [IPC Server handler 29 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 29 on 35954: = done(attempt_1424550134651_0002_r_000001_0), rpc version=3D2, client = version=3D19, methodsFingerPrint=3D937413979 from 192.168.2.252:43604 = Call#15 Retry#0 for RpcKind RPC_WRITABLE=0A= 2015-02-21 19:01:47,076 DEBUG [IPC Server handler 29 on 35954] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:job_1424550134651_0002 (auth:TOKEN) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:47,077 INFO [IPC Server handler 29 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement = from attempt_1424550134651_0002_r_000001_0=0A= 2015-02-21 19:01:47,077 DEBUG [IPC Server handler 29 on 35954] = org.apache.hadoop.ipc.Server: Served: done queueTime=3D 0 = procesingTime=3D 1=0A= 2015-02-21 19:01:47,077 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: = TA_DONE=0A= 2015-02-21 19:01:47,077 DEBUG [IPC Server handler 29 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 29 on 35954: responding = to done(attempt_1424550134651_0002_r_000001_0), rpc version=3D2, client = version=3D19, methodsFingerPrint=3D937413979 from 192.168.2.252:43604 = Call#15 Retry#0=0A= 2015-02-21 19:01:47,077 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_r_000001_0 of type TA_DONE=0A= 2015-02-21 19:01:47,078 DEBUG [IPC Server handler 29 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 29 on 35954: responding = to done(attempt_1424550134651_0002_r_000001_0), rpc version=3D2, client = version=3D19, methodsFingerPrint=3D937413979 from 192.168.2.252:43604 = Call#15 Retry#0 Wrote 118 bytes.=0A= 2015-02-21 19:01:47,078 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: = attempt_1424550134651_0002_r_000001_0 TaskAttempt Transitioned from = COMMIT_PENDING to SUCCESS_CONTAINER_CLEANUP=0A= 2015-02-21 19:01:47,078 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherEvent.EventT= ype: CONTAINER_REMOTE_CLEANUP for container = container_1424550134651_0002_01_000010 taskAttempt = attempt_1424550134651_0002_r_000001_0=0A= 2015-02-21 19:01:47,078 INFO [ContainerLauncher #3] = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: = Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container = container_1424550134651_0002_01_000010 
taskAttempt = attempt_1424550134651_0002_r_000001_0=0A= 2015-02-21 19:01:47,078 INFO [ContainerLauncher #3] = org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: = KILLING attempt_1424550134651_0002_r_000001_0=0A= 2015-02-21 19:01:47,078 INFO [ContainerLauncher #3] = org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: = Opening proxy : hadoop3.rdpratti.com:8041=0A= 2015-02-21 19:01:47,078 DEBUG [ContainerLauncher #3] = org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, = Service: 192.168.2.252:8041, Ident: = (org.apache.hadoop.yarn.security.NMTokenIdentifier@eaf5ffe)=0A= 2015-02-21 19:01:47,079 DEBUG [ContainerLauncher #3] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) = from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(Serve= rProxy.java:88)=0A= 2015-02-21 19:01:47,079 DEBUG [ContainerLauncher #3] = org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a = HadoopYarnProtoRpc proxy for protocol interface = org.apache.hadoop.yarn.api.ContainerManagementProtocol=0A= 2015-02-21 19:01:47,079 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: = disconnecting client 192.168.2.252:43604. Number of active connections: 1=0A= 2015-02-21 19:01:47,079 DEBUG [ContainerLauncher #3] = org.apache.hadoop.ipc.Client: getting client out of cache: = org.apache.hadoop.ipc.Client@27c8bfa4=0A= 2015-02-21 19:01:47,079 DEBUG [ContainerLauncher #3] = org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.=0A= 2015-02-21 19:01:47,079 DEBUG [ContainerLauncher #3] = org.apache.hadoop.ipc.Client: Connecting to = hadoop3.rdpratti.com/192.168.2.252:8041=0A= 2015-02-21 19:01:47,080 DEBUG [ContainerLauncher #3] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) = from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:7= 12)=0A= 2015-02-21 19:01:47,080 DEBUG [ContainerLauncher #3] = org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: = NEGOTIATE=0A= =0A= 2015-02-21 19:01:47,081 DEBUG [ContainerLauncher #3] = org.apache.hadoop.security.SaslRpcClient: Received SASL message state: = NEGOTIATE=0A= auths {=0A= method: "TOKEN"=0A= mechanism: "DIGEST-MD5"=0A= protocol: ""=0A= serverId: "default"=0A= challenge: = "realm=3D\"default\",nonce=3D\"Da25IS/AWen6ZkJebNtTL7GC3hghZFGHa5Uf1Ts8\"= ,qop=3D\"auth\",charset=3Dutf-8,algorithm=3Dmd5-sess"=0A= }=0A= =0A= 2015-02-21 19:01:47,081 DEBUG [ContainerLauncher #3] = org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface = org.apache.hadoop.yarn.api.ContainerManagementProtocolPB = info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@321e3= 02b=0A= 2015-02-21 19:01:47,081 INFO [ContainerLauncher #3] = org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: = 192.168.2.252:8041. 
Current token is Kind: NMToken, Service: = 192.168.2.252:8041, Ident: = (org.apache.hadoop.yarn.security.NMTokenIdentifier@5bec70c1)=0A= 2015-02-21 19:01:47,081 DEBUG [ContainerLauncher #3] = org.apache.hadoop.security.SaslRpcClient: Creating SASL = DIGEST-MD5(TOKEN) client to authenticate to service at default=0A= 2015-02-21 19:01:47,082 DEBUG [ContainerLauncher #3] = org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for = protocol ContainerManagementProtocolPB=0A= 2015-02-21 19:01:47,082 DEBUG [ContainerLauncher #3] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = username: = AAABS63OA3sAAAACAAAAAQAZaGFkb29wMy5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB0= 0Yo=0A= 2015-02-21 19:01:47,082 DEBUG [ContainerLauncher #3] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = userPassword=0A= 2015-02-21 19:01:47,082 DEBUG [ContainerLauncher #3] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = realm: default=0A= 2015-02-21 19:01:47,083 DEBUG [ContainerLauncher #3] = org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: = INITIATE=0A= token: = "charset=3Dutf-8,username=3D\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMy5yZHByYXR= 0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=3D\"default\",nonce=3D\"Da25IS/= AWen6ZkJebNtTL7GC3hghZFGHa5Uf1Ts8\",nc=3D00000001,cnonce=3D\"TJM0RogS/VuE= 5s4T89kp6igV3JHDdreY/s97AC6O\",digest-uri=3D\"/default\",maxbuf=3D65536,r= esponse=3Df0d5d29681f8c13076e8ffca807270c1,qop=3Dauth"=0A= auths {=0A= method: "TOKEN"=0A= mechanism: "DIGEST-MD5"=0A= protocol: ""=0A= serverId: "default"=0A= }=0A= =0A= 2015-02-21 19:01:47,085 DEBUG [ContainerLauncher #3] = org.apache.hadoop.security.SaslRpcClient: Received SASL message state: = SUCCESS=0A= token: "rspauth=3D1c35e05fd570231a23b0931145fe5239"=0A= =0A= 2015-02-21 19:01:47,085 DEBUG [ContainerLauncher #3] = org.apache.hadoop.ipc.Client: Negotiated QOP is :auth=0A= 2015-02-21 19:01:47,085 DEBUG [IPC Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001: starting, having connections 2=0A= 2015-02-21 19:01:47,085 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001 sending #65=0A= 2015-02-21 19:01:47,091 DEBUG [IPC Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001 got value #65=0A= 2015-02-21 19:01:47,091 DEBUG [ContainerLauncher #3] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: stopContainers took 12ms=0A= 2015-02-21 19:01:47,092 DEBUG [IPC Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001: closed=0A= 2015-02-21 19:01:47,092 DEBUG [IPC Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = 
hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001: stopped, remaining connections 1=0A= 2015-02-21 19:01:47,092 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: = TA_CONTAINER_CLEANED=0A= 2015-02-21 19:01:47,092 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_r_000001_0 of type TA_CONTAINER_CLEANED=0A= 2015-02-21 19:01:47,093 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: = attempt_1424550134651_0002_r_000001_0 TaskAttempt Transitioned from = SUCCESS_CONTAINER_CLEANUP to SUCCEEDED=0A= 2015-02-21 19:01:47,093 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventT= ype: JOB_COUNTER_UPDATE=0A= 2015-02-21 19:01:47,093 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing = job_1424550134651_0002 of type JOB_COUNTER_UPDATE=0A= 2015-02-21 19:01:47,093 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: = REDUCE_ATTEMPT_FINISHED=0A= 2015-02-21 19:01:47,093 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType:= T_ATTEMPT_SUCCEEDED=0A= 2015-02-21 19:01:47,093 DEBUG [eventHandlingThread] = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing = event=0A= 2015-02-21 19:01:47,094 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing = task_1424550134651_0002_r_000001 of type T_ATTEMPT_SUCCEEDED=0A= 2015-02-21 19:01:47,094 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded = with attempt attempt_1424550134651_0002_r_000001_0=0A= 2015-02-21 19:01:47,094 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: = task_1424550134651_0002_r_000001 Task Transitioned from RUNNING to = SUCCEEDED=0A= 2015-02-21 19:01:47,094 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: = ATTEMPT_STATUS_UPDATE=0A= 2015-02-21 19:01:47,094 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskAttemptCompletedEvent= .EventType: JOB_TASK_ATTEMPT_COMPLETED=0A= 2015-02-21 19:01:47,094 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing = job_1424550134651_0002 of type JOB_TASK_ATTEMPT_COMPLETED=0A= 2015-02-21 19:01:47,094 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskEvent.EventType: = JOB_TASK_COMPLETED=0A= 2015-02-21 19:01:47,094 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing = job_1424550134651_0002 of type JOB_TASK_COMPLETED=0A= 2015-02-21 19:01:47,094 INFO [AsyncDispatcher event 
handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 6
2015-02-21 19:01:47,094 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: TASK_FINISHED
2015-02-21 19:01:47,100 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:47,100 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock 47652 lastFlushOffset 43504
2015-02-21 19:01:47,100 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 10
2015-02-21 19:01:47,100 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 10
2015-02-21 19:01:47,100 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:10 offsetInBlock:43008 lastPacketInBlock:false lastByteOffsetInBlock: 47652
2015-02-21 19:01:47,104 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 10 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1700047
2015-02-21 19:01:47,104 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler REDUCE_ATTEMPT_FINISHED
2015-02-21 19:01:47,104 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:47,106 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=11, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=47616
2015-02-21 19:01:47,107 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:47,107 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock 50882 lastFlushOffset 47652
2015-02-21 19:01:47,107 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 11
2015-02-21 19:01:47,107 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 11
2015-02-21 19:01:47,107 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:11 offsetInBlock:47616 lastPacketInBlock:false lastByteOffsetInBlock: 50882
2015-02-21 19:01:47,109 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 11 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1616760
2015-02-21 19:01:47,109 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler TASK_FINISHED
2015-02-21 19:01:47,666 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:1 AssignedMaps:0 AssignedReds:3 CompletedMaps:5 CompletedReds:1 ContAlloc:8 ContRel:0 HostLocal:2 RackLocal:3
2015-02-21 19:01:47,667 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #66
2015-02-21 19:01:47,668 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #66
2015-02-21 19:01:47,668 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 1ms
2015-02-21 19:01:47,668 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1424550134651_0002: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit= knownNMs=4
2015-02-21 19:01:48,018 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #150
2015-02-21 19:01:48,019 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#150 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:48,019 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:48,019 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 0 procesingTime= 0
2015-02-21 19:01:48,019 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#150 Retry#0
2015-02-21 19:01:48,019 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#150 Retry#0 Wrote 102 bytes.
2015-02-21 19:01:48,020 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #151
2015-02-21 19:01:48,020 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#151 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:48,020 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:48,021 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 1 procesingTime= 0
2015-02-21 19:01:48,021 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#151 Retry#0
2015-02-21 19:01:48,021 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#151 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:48,022 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #152
2015-02-21 19:01:48,022 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#152 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:48,022 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:48,023 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 1
2015-02-21 19:01:48,023 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#152 Retry#0
2015-02-21 19:01:48,023 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#152 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:48,196 DEBUG [IPC Server idle connection scanner for port 59910] org.apache.hadoop.ipc.Server: IPC Server idle connection scanner for port 59910: task running
2015-02-21 19:01:48,312 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #1
2015-02-21 19:01:48,312 DEBUG [IPC Server handler 1 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 1 on 35954: getMapCompletionEvents(job_1424550134651_0002, 0, 10000, attempt_1424550134651_0002_r_000000_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#1 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:48,312 DEBUG [IPC Server handler 1 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:48,312 INFO [IPC Server handler 1 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: MapCompletionEvents request from attempt_1424550134651_0002_r_000000_0. startIndex 0 maxEvents 10000
2015-02-21 19:01:48,313 DEBUG [IPC Server handler 1 on 35954] org.apache.hadoop.ipc.Server: Served: getMapCompletionEvents queueTime= 0 procesingTime= 1
2015-02-21 19:01:48,313 DEBUG [IPC Server handler 1 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 1 on 35954: responding to getMapCompletionEvents(job_1424550134651_0002, 0, 10000, attempt_1424550134651_0002_r_000000_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#1 Retry#0
2015-02-21 19:01:48,313 DEBUG [IPC Server handler 1 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 1 on 35954: responding to getMapCompletionEvents(job_1424550134651_0002, 0, 10000, attempt_1424550134651_0002_r_000000_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#1 Retry#0 Wrote 553 bytes.
2015-02-21 19:01:48,669 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #67
2015-02-21 19:01:48,671 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #67
2015-02-21 19:01:48,672 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 3ms
2015-02-21 19:01:48,672 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received new Container :Container: [ContainerId: container_1424550134651_0002_01_000014, NodeId: hadoop3.rdpratti.com:8041, NodeHttpAddress: hadoop3.rdpratti.com:8042, Resource: , Priority: 10, Token: Token { kind: ContainerToken, service: 192.168.2.252:8041 }, ]
2015-02-21 19:01:48,672 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1424550134651_0002_01_000010
2015-02-21 19:01:48,672 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_CONTAINER_COMPLETED
2015-02-21 19:01:48,672 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2015-02-21 19:01:48,672 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning container container_1424550134651_0002_01_000014 with priority 10 to NM hadoop3.rdpratti.com:8041
2015-02-21 19:01:48,672 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000001_0 of type TA_CONTAINER_COMPLETED
2015-02-21 19:01:48,672 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptDiagnosticsUpdateEvent.EventType: TA_DIAGNOSTICS_UPDATE
2015-02-21 19:01:48,672 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning container Container: [ContainerId: container_1424550134651_0002_01_000014, NodeId: hadoop3.rdpratti.com:8041, NodeHttpAddress: hadoop3.rdpratti.com:8042, Resource: , Priority: 10, Token: Token { kind: ContainerToken, service: 192.168.2.252:8041 }, ] to reduce
2015-02-21 19:01:48,672 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned to reduce
2015-02-21 19:01:48,672 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000001_0 of type TA_DIAGNOSTICS_UPDATE
2015-02-21 19:01:48,672 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: BEFORE decResourceRequest: applicationId=2 priority=10 resourceName=* numContainers=1 #asks=0
2015-02-21 19:01:48,672 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1424550134651_0002_r_000001_0: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2015-02-21 19:01:48,672 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: AFTER decResourceRequest: applicationId=2 priority=10 resourceName=* numContainers=0 #asks=1
2015-02-21 19:01:48,673 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerAssignedEvent.EventType: TA_ASSIGNED
2015-02-21 19:01:48,673 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1424550134651_0002_01_000014 to attempt_1424550134651_0002_r_000003_0
2015-02-21 19:01:48,673 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000003_0 of type TA_ASSIGNED
2015-02-21 19:01:48,673 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapred.SortedRanges: currentIndex 0 0:0
2015-02-21 19:01:48,673 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container (Container: [ContainerId: container_1424550134651_0002_01_000014, NodeId: hadoop3.rdpratti.com:8041, NodeHttpAddress: hadoop3.rdpratti.com:8042, Resource: , Priority: 10, Token: Token { kind: ContainerToken, service: 192.168.2.252:8041 }, ]) to task attempt_1424550134651_0002_r_000003_0 on node hadoop3.rdpratti.com:8041
2015-02-21 19:01:48,673 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:3 CompletedMaps:5 CompletedReds:1 ContAlloc:9 ContRel:0 HostLocal:2 RackLocal:3
2015-02-21 19:01:48,673 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.util.RackResolver: Resolved hadoop3.rdpratti.com to /default
2015-02-21 19:01:48,674 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_r_000003_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2015-02-21 19:01:48,674 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.launcher.ContainerRemoteLaunchEvent.EventType: CONTAINER_REMOTE_LAUNCH for container container_1424550134651_0002_01_000014 taskAttempt attempt_1424550134651_0002_r_000003_0
2015-02-21 19:01:48,674 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: TASK_CONTAINER_NEED_UPDATE
2015-02-21 19:01:48,674 INFO [ContainerLauncher #4] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1424550134651_0002_01_000014 taskAttempt attempt_1424550134651_0002_r_000003_0
2015-02-21 19:01:48,674 INFO [ContainerLauncher #4] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1424550134651_0002_r_000003_0
2015-02-21 19:01:48,674 INFO [ContainerLauncher #4] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : hadoop3.rdpratti.com:8041
2015-02-21 19:01:48,674 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, Service: 192.168.2.252:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@597ff12a)
2015-02-21 19:01:48,674 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:88)
2015-02-21 19:01:48,675 DEBUG [ContainerLauncher #4] org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
2015-02-21 19:01:48,675 DEBUG [ContainerLauncher #4] org.apache.hadoop.ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@27c8bfa4
2015-02-21 19:01:48,676 DEBUG [ContainerLauncher #4] org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.
2015-02-21 19:01:48,676 DEBUG [ContainerLauncher #4] org.apache.hadoop.ipc.Client: Connecting to hadoop3.rdpratti.com/192.168.2.252:8041
2015-02-21 19:01:48,676 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
2015-02-21 19:01:48,677 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 19:01:48,678 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
method: "TOKEN"
mechanism: "DIGEST-MD5"
protocol: ""
serverId: "default"
challenge: "realm=\"default\",nonce=\"i1QOGkfQOOJv+wWjTuduowcn74uTu5OOW708M7fn\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}

2015-02-21 19:01:48,678 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@5fbfdd96
2015-02-21 19:01:48,678 INFO [ContainerLauncher #4] org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: 192.168.2.252:8041. Current token is Kind: NMToken, Service: 192.168.2.252:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@2d6cd3f6)
2015-02-21 19:01:48,678 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2015-02-21 19:01:48,678 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2015-02-21 19:01:48,678 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting username: AAABS63OA3sAAAACAAAAAQAZaGFkb29wMy5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo
2015-02-21 19:01:48,678 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting userPassword
2015-02-21 19:01:48,678 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting realm: default
2015-02-21 19:01:48,679 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
token: "charset=utf-8,username=\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMy5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=\"default\",nonce=\"i1QOGkfQOOJv+wWjTuduowcn74uTu5OOW708M7fn\",nc=00000001,cnonce=\"DKOnDiGiJJ9+dEjt/SrcnLGSWwtpVtDwFnShylwc\",digest-uri=\"/default\",maxbuf=65536,response=d9c1aedf7626a711fb0c06a521c64959,qop=auth"
auths {
method: "TOKEN"
mechanism: "DIGEST-MD5"
protocol: ""
serverId: "default"
}

2015-02-21 19:01:48,681 DEBUG [ContainerLauncher #4] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: SUCCESS
token: "rspauth=a7f0fe051943fe30bdafeff4ce70a2df"

2015-02-21 19:01:48,681 DEBUG [ContainerLauncher #4] org.apache.hadoop.ipc.Client: Negotiated QOP is :auth
2015-02-21 19:01:48,685 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001 sending #68
2015-02-21 19:01:48,685 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001: starting, having connections 2
2015-02-21 19:01:48,688 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001 got value #68
2015-02-21 19:01:48,688 DEBUG [ContainerLauncher #4] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: startContainers took 12ms
2015-02-21 19:01:48,689 INFO [ContainerLauncher #4] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1424550134651_0002_r_000003_0 : 13562
2015-02-21 19:01:48,689 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptContainerLaunchedEvent.EventType: TA_CONTAINER_LAUNCHED
2015-02-21 19:01:48,689 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001: closed
2015-02-21 19:01:48,689 DEBUG [IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001: stopped, remaining connections 1
2015-02-21 19:01:48,689 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000003_0 of type TA_CONTAINER_LAUNCHED
2015-02-21 19:01:48,689 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1424550134651_0002_r_000003_0] using containerId: [container_1424550134651_0002_01_000014 on NM: [hadoop3.rdpratti.com:8041]
2015-02-21 19:01:48,689 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_r_000003_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2015-02-21 19:01:48,689 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:48,689 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:48,689 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: REDUCE_ATTEMPT_STARTED
2015-02-21 19:01:48,690 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_START
2015-02-21 19:01:48,690 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:48,690 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType: T_ATTEMPT_LAUNCHED
2015-02-21 19:01:48,690 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_r_000003 of type T_ATTEMPT_LAUNCHED
2015-02-21 19:01:48,690 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1424550134651_0002_r_000003 Task Transitioned from SCHEDULED to RUNNING
2015-02-21 19:01:48,690 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=12, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=50688
2015-02-21 19:01:48,690 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler REDUCE_ATTEMPT_STARTED
2015-02-21 19:01:48,700 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #2
2015-02-21 19:01:48,700 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: statusUpdate(attempt_1424550134651_0002_r_000000_0, org.apache.hadoop.mapred.ReduceTaskStatus@43525abc), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#2 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:48,700 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:48,701 INFO [IPC Server handler 6 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1424550134651_0002_r_000000_0 is : 0.0
2015-02-21 19:01:48,701 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime= 1 procesingTime= 0
2015-02-21 19:01:48,701 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.EventType: TA_UPDATE
2015-02-21 19:01:48,702 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000000_0, org.apache.hadoop.mapred.ReduceTaskStatus@43525abc), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#2 Retry#0
2015-02-21 19:01:48,702 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000000_0 of type TA_UPDATE
2015-02-21 19:01:48,702 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000000_0, org.apache.hadoop.mapred.ReduceTaskStatus@43525abc), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#2 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:48,702 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:49,025 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #153
2015-02-21 19:01:49,025 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#153 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:49,025 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:49,025 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 0 procesingTime= 0
2015-02-21 19:01:49,025 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#153 Retry#0
2015-02-21 19:01:49,025 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#153 Retry#0 Wrote 33 bytes.
2015-02-21 19:01:49,026 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #154
2015-02-21 19:01:49,027 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#154 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:49,027 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:49,027 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:49,027 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#154 Retry#0
2015-02-21 19:01:49,027 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#154 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:49,028 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #155
2015-02-21 19:01:49,029 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#155 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:49,029 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:49,029 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 1 procesingTime= 0
2015-02-21 19:01:49,029 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#155 Retry#0
2015-02-21 19:01:49,029 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#155 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:49,178 DEBUG [IPC Server idle connection scanner for port 35954] org.apache.hadoop.ipc.Server: IPC Server idle connection scanner for port 35954: task running
2015-02-21 19:01:49,377 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #3
2015-02-21 19:01:49,378 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 2 on 35954: statusUpdate(attempt_1424550134651_0002_r_000000_0, org.apache.hadoop.mapred.ReduceTaskStatus@653681e7), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#3 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:49,378 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:49,378 INFO [IPC Server handler 2 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1424550134651_0002_r_000000_0 is : 0.0
2015-02-21 19:01:49,379 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime= 0 procesingTime= 1
2015-02-21 19:01:49,379 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.EventType: TA_UPDATE
2015-02-21 19:01:49,379 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 2 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000000_0, org.apache.hadoop.mapred.ReduceTaskStatus@653681e7), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#3 Retry#0
2015-02-21 19:01:49,379 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000000_0 of type TA_UPDATE
2015-02-21 19:01:49,379 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 2 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000000_0, org.apache.hadoop.mapred.ReduceTaskStatus@653681e7), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#3 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:49,379 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:49,625 DEBUG [LeaseRenewer:cloudera@hadoop0.rdpratti.com:8020] org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.
2015-02-21 19:01:49,626 DEBUG [LeaseRenewer:cloudera@hadoop0.rdpratti.com:8020] org.apache.hadoop.ipc.Client: Connecting to hadoop0.rdpratti.com/192.168.2.253:8020
2015-02-21 19:01:49,626 DEBUG [LeaseRenewer:cloudera@hadoop0.rdpratti.com:8020] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
2015-02-21 19:01:49,626 DEBUG [LeaseRenewer:cloudera@hadoop0.rdpratti.com:8020] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 19:01:49,627 DEBUG [LeaseRenewer:cloudera@hadoop0.rdpratti.com:8020] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
method: "TOKEN"
mechanism: "DIGEST-MD5"
protocol: ""
serverId: "default"
challenge: "realm=\"default\",nonce=\"8Wy0no7KQqU95XpmBPnMfwoV6QYxjYcvSQJRWsDK\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}
auths {
method: "SIMPLE"
mechanism: ""
}

2015-02-21 19:01:49,627 DEBUG [LeaseRenewer:cloudera@hadoop0.rdpratti.com:8020] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:@org.apache.hadoop.security.token.TokenInfo(value=class org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector)
2015-02-21 19:01:49,627 DEBUG [LeaseRenewer:cloudera@hadoop0.rdpratti.com:8020] org.apache.hadoop.security.SaslRpcClient: Use SIMPLE authentication for protocol ClientNamenodeProtocolPB
2015-02-21 19:01:49,628 DEBUG [LeaseRenewer:cloudera@hadoop0.rdpratti.com:8020] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
auths {
method: "SIMPLE"
mechanism: ""
}

2015-02-21 19:01:49,628 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera: starting, having connections 2
2015-02-21 19:01:49,628 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #69
2015-02-21 19:01:49,670 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #69
2015-02-21 19:01:49,670 DEBUG [LeaseRenewer:cloudera@hadoop0.rdpratti.com:8020] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: renewLease took 45ms
2015-02-21 19:01:49,673 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #70
2015-02-21 19:01:49,674 DEBUG [LeaseRenewer:cloudera@hadoop0.rdpratti.com:8020] org.apache.hadoop.hdfs.LeaseRenewer: Lease renewed for client DFSClient_NONMAPREDUCE_-907115631_1
2015-02-21 19:01:49,674 DEBUG [LeaseRenewer:cloudera@hadoop0.rdpratti.com:8020] org.apache.hadoop.hdfs.LeaseRenewer: Lease renewer daemon for [DFSClient_NONMAPREDUCE_-907115631_1] with renew id 1 executed
2015-02-21 19:01:49,675 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #70
2015-02-21 19:01:49,675 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms
2015-02-21 19:01:49,675 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1424550134651_0002: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit= knownNMs=4
2015-02-21 19:01:49,808 DEBUG [IPC Server listener on 35954] org.apache.hadoop.ipc.Server: Server connection from 192.168.2.250:34290; # active connections: 2; # queued calls: 0
2015-02-21 19:01:49,855 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #-33
2015-02-21 19:01:49,855 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.security.SaslRpcServer: Created SASL server with mechanism = DIGEST-MD5
2015-02-21 19:01:49,856 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Sending sasl message state: NEGOTIATE
auths {
method: "TOKEN"
mechanism: "DIGEST-MD5"
protocol: ""
serverId: "default"
challenge: "realm=\"default\",nonce=\"ieKwZwzARwPlzUepedu/Thpkqli8JxROmudPTNDR\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}
auths {
method: "SIMPLE"
mechanism: ""
}

2015-02-21 19:01:49,856 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.250:34290 Call#-33 Retry#-1
2015-02-21 19:01:49,856 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.250:34290 Call#-33 Retry#-1 Wrote 178 bytes.
2015-02-21 19:01:50,006 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #-33
2015-02-21 19:01:50,006 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Have read input token of size 270 for processing by saslServer.evaluateResponse()
2015-02-21 19:01:50,007 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting password for client: job_1424550134651_0002 (auth:SIMPLE)
2015-02-21 19:01:50,007 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting canonicalized client ID: job_1424550134651_0002
2015-02-21 19:01:50,007 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Will send SUCCESS token of size 40 from saslServer.
2015-02-21 19:01:50,007 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: SASL server context established. Negotiated QoP is auth
2015-02-21 19:01:50,008 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: SASL server successfully authenticated client: job_1424550134651_0002 (auth:SIMPLE)
2015-02-21 19:01:50,008 INFO [Socket Reader #1 for port 35954] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1424550134651_0002 (auth:SIMPLE)
2015-02-21 19:01:50,008 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Sending sasl message state: SUCCESS
token: "rspauth=4282bb07c1c979338cd9b6b43baed4dd"

2015-02-21 19:01:50,008 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.250:34290 Call#-33 Retry#-1
2015-02-21 19:01:50,008 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.250:34290 Call#-33 Retry#-1 Wrote 64 bytes.
2015-02-21 19:01:50,031 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #156
2015-02-21 19:01:50,031 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#156 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:50,031 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:50,031 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 0 procesingTime= 0
2015-02-21 19:01:50,031 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#156 Retry#0
2015-02-21 19:01:50,031 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#156 Retry#0 Wrote 33 bytes.
2015-02-21 19:01:50,032 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #157
2015-02-21 19:01:50,032 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#157 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:50,032 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:50,033 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 1
2015-02-21 19:01:50,033 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#157 Retry#0
2015-02-21 19:01:50,033 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#157 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:50,034 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #158
2015-02-21 19:01:50,034 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#158 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:50,034 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:50,034 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:50,034 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#158 Retry#0
2015-02-21 19:01:50,034 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#158 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:50,036 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #-3
2015-02-21 19:01:50,036 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Successfully authorized userInfo {
}
protocol: "org.apache.hadoop.mapred.TaskUmbilicalProtocol"

2015-02-21 19:01:50,036 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #0
2015-02-21 19:01:50,036 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: getTask(org.apache.hadoop.mapred.JvmContext@666b8eb1), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#0 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:50,037 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:50,037 INFO [IPC Server handler 4 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1424550134651_0002_r_000011 asked for a task
2015-02-21 19:01:50,037 INFO [IPC Server handler 4 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1424550134651_0002_r_000011 given task: attempt_1424550134651_0002_r_000002_0
2015-02-21 19:01:50,037 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: Served: getTask queueTime= 1 procesingTime= 0
2015-02-21 19:01:50,037 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: responding to getTask(org.apache.hadoop.mapred.JvmContext@666b8eb1), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#0 Retry#0
2015-02-21 19:01:50,037 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: responding to getTask(org.apache.hadoop.mapred.JvmContext@666b8eb1), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#0 Retry#0 Wrote 450 bytes.
2015-02-21 19:01:50,273 DEBUG [IPC Server listener on 35954] org.apache.hadoop.ipc.Server: Server connection from 192.168.2.252:43613; # active connections: 3; # queued calls: 0
2015-02-21 19:01:50,292 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #-33
2015-02-21 19:01:50,292 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.security.SaslRpcServer: Created SASL server with mechanism = DIGEST-MD5
2015-02-21 19:01:50,293 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Sending sasl message state: NEGOTIATE
auths {
method: "TOKEN"
mechanism: "DIGEST-MD5"
protocol: ""
serverId: "default"
challenge: "realm=\"default\",nonce=\"J7EyKhcEvfp3ajYdYuTrZS0m083mEPiwSwtpAiFJ\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}
auths {
method: "SIMPLE"
mechanism: ""
}

2015-02-21 19:01:50,293 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.252:43613 Call#-33 Retry#-1
2015-02-21 19:01:50,293 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.252:43613 Call#-33 Retry#-1 Wrote 178 bytes.
2015-02-21 19:01:50,379 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #-33
2015-02-21 19:01:50,379 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Have read input token of size 270 for processing by saslServer.evaluateResponse()
2015-02-21 19:01:50,380 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting password for client: job_1424550134651_0002 (auth:SIMPLE)
2015-02-21 19:01:50,380 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.security.SaslRpcServer: SASL server DIGEST-MD5 callback: setting canonicalized client ID: job_1424550134651_0002
2015-02-21 19:01:50,380 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Will send SUCCESS token of size 40 from saslServer.
2015-02-21 19:01:50,380 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: SASL server context established. Negotiated QoP is auth
2015-02-21 19:01:50,381 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: SASL server successfully authenticated client: job_1424550134651_0002 (auth:SIMPLE)
2015-02-21 19:01:50,381 INFO [Socket Reader #1 for port 35954] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1424550134651_0002 (auth:SIMPLE)
2015-02-21 19:01:50,381 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Sending sasl message state: SUCCESS
token: "rspauth=23d7d12e060f3a11636c691b9883c9f9"

2015-02-21 19:01:50,381 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.252:43613 Call#-33 Retry#-1
2015-02-21 19:01:50,381 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: responding to null from 192.168.2.252:43613 Call#-33 Retry#-1 Wrote 64 bytes.
2015-02-21 19:01:50,394 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #-3
2015-02-21 19:01:50,395 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Successfully authorized userInfo {
}
protocol: "org.apache.hadoop.mapred.TaskUmbilicalProtocol"

2015-02-21 19:01:50,395 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #0
2015-02-21 19:01:50,395 DEBUG [IPC Server handler 5 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 5 on 35954: getTask(org.apache.hadoop.mapred.JvmContext@3aeeb134), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#0 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:50,395 DEBUG [IPC Server handler 5 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:50,395 INFO [IPC Server handler 5 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1424550134651_0002_r_000014 asked for a task
2015-02-21 19:01:50,395 INFO [IPC Server handler 5 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1424550134651_0002_r_000014 given task: attempt_1424550134651_0002_r_000003_0
2015-02-21 19:01:50,395 DEBUG [IPC Server handler 5 on 35954] org.apache.hadoop.ipc.Server: Served: getTask queueTime= 0 procesingTime= 0
2015-02-21 19:01:50,396 DEBUG [IPC Server handler 5 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 5 on 35954: responding to getTask(org.apache.hadoop.mapred.JvmContext@3aeeb134), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#0 Retry#0
2015-02-21 19:01:50,396 DEBUG [IPC Server handler 5 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 5 on 35954: responding to getTask(org.apache.hadoop.mapred.JvmContext@3aeeb134), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#0 Retry#0 Wrote 450 bytes.
2015-02-21 19:01:50,676 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #71
hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #71=0A= 2015-02-21 19:01:50,678 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #71=0A= 2015-02-21 19:01:50,678 DEBUG [RMCommunicator Allocator] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms=0A= 2015-02-21 19:01:50,907 DEBUG [Socket Reader #1 for port 35954] = org.apache.hadoop.ipc.Server: got #9=0A= 2015-02-21 19:01:50,908 DEBUG [IPC Server handler 8 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: = commitPending(attempt_1424550134651_0002_r_000000_0, = org.apache.hadoop.mapred.ReduceTaskStatus@13654292), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.251:58639 Call#9 Retry#0 for RpcKind RPC_WRITABLE=0A= 2015-02-21 19:01:50,908 DEBUG [IPC Server handler 8 on 35954] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:job_1424550134651_0002 (auth:TOKEN) = from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)=0A= 2015-02-21 19:01:50,908 INFO [IPC Server handler 8 on 35954] = org.apache.hadoop.mapred.TaskAttemptListenerImpl: Commit-pending state = update from attempt_1424550134651_0002_r_000000_0=0A= 2015-02-21 19:01:50,908 DEBUG [IPC Server handler 8 on 35954] = org.apache.hadoop.ipc.Server: Served: commitPending queueTime=3D 0 = procesingTime=3D 0=0A= 2015-02-21 19:01:50,908 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: = TA_COMMIT_PENDING=0A= 2015-02-21 19:01:50,909 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing = attempt_1424550134651_0002_r_000000_0 of type TA_COMMIT_PENDING=0A= 2015-02-21 19:01:50,909 DEBUG [IPC Server handler 8 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: responding = to commitPending(attempt_1424550134651_0002_r_000000_0, = org.apache.hadoop.mapred.ReduceTaskStatus@13654292), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.251:58639 Call#9 Retry#0=0A= 2015-02-21 19:01:50,909 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: = attempt_1424550134651_0002_r_000000_0 TaskAttempt Transitioned from = RUNNING to COMMIT_PENDING=0A= 2015-02-21 19:01:50,909 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event = org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType:= T_ATTEMPT_COMMIT_PENDING=0A= 2015-02-21 19:01:50,909 DEBUG [IPC Server handler 8 on 35954] = org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: responding = to commitPending(attempt_1424550134651_0002_r_000000_0, = org.apache.hadoop.mapred.ReduceTaskStatus@13654292), rpc version=3D2, = client version=3D19, methodsFingerPrint=3D937413979 from = 192.168.2.251:58639 Call#9 Retry#0 Wrote 118 bytes.=0A= 2015-02-21 19:01:50,909 DEBUG [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing = task_1424550134651_0002_r_000000 of type T_ATTEMPT_COMMIT_PENDING=0A= 2015-02-21 19:01:50,909 INFO [AsyncDispatcher event handler] = org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: = attempt_1424550134651_0002_r_000000_0 given a 
go for committing the task output.
2015-02-21 19:01:50,916 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #10
2015-02-21 19:01:50,917 DEBUG [IPC Server handler 23 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 23 on 35954: canCommit(attempt_1424550134651_0002_r_000000_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#10 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:50,917 DEBUG [IPC Server handler 23 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:50,917 INFO [IPC Server handler 23 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Commit go/no-go request from attempt_1424550134651_0002_r_000000_0
2015-02-21 19:01:50,917 INFO [IPC Server handler 23 on 35954] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Result of canCommit for attempt_1424550134651_0002_r_000000_0:true
2015-02-21 19:01:50,917 DEBUG [IPC Server handler 23 on 35954] org.apache.hadoop.ipc.Server: Served: canCommit queueTime= 0 procesingTime= 0
2015-02-21 19:01:50,917 DEBUG [IPC Server handler 23 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 23 on 35954: responding to canCommit(attempt_1424550134651_0002_r_000000_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#10 Retry#0
2015-02-21 19:01:50,917 DEBUG [IPC Server handler 23 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 23 on 35954: responding to canCommit(attempt_1424550134651_0002_r_000000_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#10 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:51,036 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #159
2015-02-21 19:01:51,036 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#159 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:51,036 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:51,037 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 1 procesingTime= 0
2015-02-21 19:01:51,037 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#159 Retry#0
2015-02-21 19:01:51,037 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#159 Retry#0 Wrote 33 bytes.
2015-02-21 19:01:51,038 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #160
2015-02-21 19:01:51,038 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#160 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:51,038 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:51,038 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:51,038 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#160 Retry#0
2015-02-21 19:01:51,038 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#160 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:51,039 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #161
2015-02-21 19:01:51,039 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#161 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:51,039 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:51,040 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 1
2015-02-21 19:01:51,040 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#161 Retry#0
2015-02-21 19:01:51,040 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#161 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:51,059 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #14
2015-02-21 19:01:51,059 DEBUG [IPC Server handler 29 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 29 on 35954: statusUpdate(attempt_1424550134651_0002_r_000000_0, org.apache.hadoop.mapred.ReduceTaskStatus@747c1c55), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#14 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:51,059 DEBUG [IPC Server handler 29 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:51,060 INFO [IPC Server handler 29 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1424550134651_0002_r_000000_0 is : 0.73241866
2015-02-21 19:01:51,062 DEBUG [IPC Server handler 29 on 35954] org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime= 0 procesingTime= 3
2015-02-21 19:01:51,062 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.EventType: TA_UPDATE
2015-02-21 19:01:51,062 DEBUG [IPC Server handler 29 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 29 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000000_0, org.apache.hadoop.mapred.ReduceTaskStatus@747c1c55), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#14 Retry#0
2015-02-21 19:01:51,062 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000000_0 of type TA_UPDATE
2015-02-21 19:01:51,062 DEBUG [IPC Server handler 29 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 29 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000000_0, org.apache.hadoop.mapred.ReduceTaskStatus@747c1c55), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#14 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:51,062 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:51,063 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #15
2015-02-21 19:01:51,064 DEBUG [IPC Server handler 28 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 28 on 35954: done(attempt_1424550134651_0002_r_000000_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#15 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:51,064 DEBUG [IPC Server handler 28 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:51,064 INFO [IPC Server handler 28 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1424550134651_0002_r_000000_0
2015-02-21 19:01:51,064 DEBUG [IPC Server handler 28 on 35954] org.apache.hadoop.ipc.Server: Served: done queueTime= 0 procesingTime= 0
2015-02-21 19:01:51,064 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_DONE
2015-02-21 19:01:51,064 DEBUG [IPC Server handler 28 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 28 on 35954: responding to done(attempt_1424550134651_0002_r_000000_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#15 Retry#0
2015-02-21 19:01:51,064 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000000_0 of type TA_DONE
2015-02-21 19:01:51,064 DEBUG [IPC Server handler 28 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 28 on 35954: responding to done(attempt_1424550134651_0002_r_000000_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.251:58639 Call#15 Retry#0 Wrote 118 bytes.
2015-02-21 19:01:51,064 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_r_000000_0 TaskAttempt Transitioned from COMMIT_PENDING to SUCCESS_CONTAINER_CLEANUP
2015-02-21 19:01:51,064 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherEvent.EventType: CONTAINER_REMOTE_CLEANUP for container container_1424550134651_0002_01_000009 taskAttempt attempt_1424550134651_0002_r_000000_0
2015-02-21 19:01:51,065 INFO [ContainerLauncher #5] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_1424550134651_0002_01_000009 taskAttempt attempt_1424550134651_0002_r_000000_0
2015-02-21 19:01:51,065 INFO [ContainerLauncher #5] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1424550134651_0002_r_000000_0
2015-02-21 19:01:51,065 INFO [ContainerLauncher #5] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : hadoop2.rdpratti.com:8041
2015-02-21 19:01:51,065 DEBUG [ContainerLauncher #5] org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, Service: 192.168.2.251:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@6b327c17)
2015-02-21 19:01:51,065 DEBUG [ContainerLauncher #5] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:88)
2015-02-21 19:01:51,065 DEBUG [ContainerLauncher #5] org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
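The canCommit/done exchange above is the MapReduce commit handshake: the reduce attempt reports commit-pending, polls the AM for a go/no-go (the AM's TaskImpl arbitrates so at most one attempt per task ever commits), commits its output, and acks done. A rough sketch of the attempt side, against a made-up stand-in for the umbilical RPC - this is just the shape of the calls logged here, not Hadoop's actual TaskUmbilicalProtocol source:

    import java.io.IOException;

    // Hypothetical stand-in for the task-to-AM umbilical; the method names
    // mirror the RPCs in the log (commitPending / canCommit / done).
    interface Umbilical {
        void commitPending(String attemptId) throws IOException;
        boolean canCommit(String attemptId) throws IOException; // AM arbitrates
        void done(String attemptId) throws IOException;
    }

    class CommitHandshakeSketch {
        // What a reduce attempt does once its output is staged:
        static void commitAndFinish(Umbilical um, String attemptId)
                throws IOException, InterruptedException {
            um.commitPending(attemptId);        // "Commit-pending state update from ..."
            while (!um.canCommit(attemptId)) {  // "Commit go/no-go request from ..."
                Thread.sleep(1000L);            // poll until the AM answers true
            }
            // the OutputCommitter would commit the task output here
            um.done(attemptId);                 // "Done acknowledgement from ..."
        }
    }
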
2015-02-21 19:01:51,066 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: disconnecting client 192.168.2.251:58639. Number of active connections: 2
2015-02-21 19:01:51,066 DEBUG [ContainerLauncher #5] org.apache.hadoop.ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@27c8bfa4
2015-02-21 19:01:51,067 DEBUG [ContainerLauncher #5] org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.
2015-02-21 19:01:51,067 DEBUG [ContainerLauncher #5] org.apache.hadoop.ipc.Client: Connecting to hadoop2.rdpratti.com/192.168.2.251:8041
2015-02-21 19:01:51,067 DEBUG [ContainerLauncher #5] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
2015-02-21 19:01:51,068 DEBUG [ContainerLauncher #5] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 19:01:51,069 DEBUG [ContainerLauncher #5] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
method: "TOKEN"
mechanism: "DIGEST-MD5"
protocol: ""
serverId: "default"
challenge: "realm=\"default\",nonce=\"GmTHMRU19HculUXAKisV8tt14WWx5oyx3XpIVi1F\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}

2015-02-21 19:01:51,069 DEBUG [ContainerLauncher #5] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@59bd95a1
2015-02-21 19:01:51,069 INFO [ContainerLauncher #5] org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: 192.168.2.251:8041. Current token is Kind: NMToken, Service: 192.168.2.251:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@c21f7ed)
2015-02-21 19:01:51,069 DEBUG [ContainerLauncher #5] org.apache.hadoop.security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2015-02-21 19:01:51,070 DEBUG [ContainerLauncher #5] org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2015-02-21 19:01:51,070 DEBUG [ContainerLauncher #5] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting username: AAABS63OA3sAAAACAAAAAQAZaGFkb29wMi5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo
2015-02-21 19:01:51,070 DEBUG [ContainerLauncher #5] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting userPassword
2015-02-21 19:01:51,070 DEBUG [ContainerLauncher #5] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting realm: default
2015-02-21 19:01:51,070 DEBUG [ContainerLauncher #5] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
token: "charset=utf-8,username=\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMi5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=\"default\",nonce=\"GmTHMRU19HculUXAKisV8tt14WWx5oyx3XpIVi1F\",nc=00000001,cnonce=\"8hrRxRb5jfA3+kK4PG0n7PoxGT06cBzvkeFhzFQ+\",digest-uri=\"/default\",maxbuf=65536,response=f080474356418db6805a14a8d6b27af2,qop=auth"
auths {
method: "TOKEN"
mechanism: "DIGEST-MD5"
protocol: ""
serverId: "default"
}

2015-02-21 19:01:51,074 DEBUG [ContainerLauncher #5] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: SUCCESS
token: "rspauth=8d71bb25eed99c3a45e0a095323f0970"

2015-02-21 19:01:51,075 DEBUG [ContainerLauncher #5] org.apache.hadoop.ipc.Client: Negotiated QOP is :auth
2015-02-21 19:01:51,075 DEBUG [IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001: starting, having connections 3
2015-02-21 19:01:51,075 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001 sending #72
2015-02-21 19:01:51,083 DEBUG [IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001 got value #72
2015-02-21 19:01:51,083 DEBUG [IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001: closed
2015-02-21 19:01:51,083 DEBUG [IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop2.rdpratti.com/192.168.2.251:8041 from appattempt_1424550134651_0002_000001: stopped, remaining connections 2
2015-02-21 19:01:51,083 DEBUG [ContainerLauncher #5] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: stopContainers took 16ms
2015-02-21 19:01:51,083 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_CONTAINER_CLEANED
2015-02-21 19:01:51,084 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000000_0 of type TA_CONTAINER_CLEANED
2015-02-21 19:01:51,084 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_r_000000_0 TaskAttempt Transitioned from SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
2015-02-21 19:01:51,084 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:51,084 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:51,084 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: REDUCE_ATTEMPT_FINISHED
2015-02-21 19:01:51,084 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType: T_ATTEMPT_SUCCEEDED
2015-02-21 19:01:51,084 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:51,084 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_r_000000 of type T_ATTEMPT_SUCCEEDED
2015-02-21 19:01:51,084 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1424550134651_0002_r_000000_0
2015-02-21 19:01:51,084 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1424550134651_0002_r_000000 Task Transitioned from RUNNING to SUCCEEDED
2015-02-21 19:01:51,084 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:51,084 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskAttemptCompletedEvent.EventType: JOB_TASK_ATTEMPT_COMPLETED
2015-02-21 19:01:51,084 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_TASK_ATTEMPT_COMPLETED
2015-02-21 19:01:51,085 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskEvent.EventType: JOB_TASK_COMPLETED
2015-02-21 19:01:51,085 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_TASK_COMPLETED
2015-02-21 19:01:51,085 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 7
2015-02-21 19:01:51,085 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: TASK_FINISHED
2015-02-21 19:01:51,087 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:51,087 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock 55033 lastFlushOffset 50882
2015-02-21 19:01:51,087 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 12
2015-02-21 19:01:51,087 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 12
2015-02-21 19:01:51,087 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:12 offsetInBlock:50688 lastPacketInBlock:false lastByteOffsetInBlock: 55033
2015-02-21 19:01:51,090 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 12 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1758136
2015-02-21 19:01:51,090 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler REDUCE_ATTEMPT_FINISHED
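The NEGOTIATE/INITIATE/SUCCESS exchange above is a standard server-first DIGEST-MD5 SASL negotiation: the AM picks the NMToken whose service matches the NodeManager's address, feeds the token's identifier and password into the SASL callbacks, and answers the server's challenge before calling stopContainers. Roughly the client side with plain JDK javax.security.sasl - tokenUser and tokenPassword here are placeholders for what the NMToken supplies:

    import javax.security.auth.callback.Callback;
    import javax.security.auth.callback.CallbackHandler;
    import javax.security.auth.callback.NameCallback;
    import javax.security.auth.callback.PasswordCallback;
    import javax.security.sasl.RealmCallback;
    import javax.security.sasl.Sasl;
    import javax.security.sasl.SaslClient;
    import javax.security.sasl.SaslException;

    // Rough sketch of the DIGEST-MD5 client side logged above; not the
    // actual Hadoop SaslRpcClient code.
    class SaslDigestSketch {
        static byte[] initiate(byte[] serverChallenge,
                               String tokenUser, char[] tokenPassword) throws SaslException {
            CallbackHandler handler = callbacks -> {
                for (Callback cb : callbacks) {
                    if (cb instanceof NameCallback) {
                        ((NameCallback) cb).setName(tokenUser);             // "setting username: ..."
                    } else if (cb instanceof PasswordCallback) {
                        ((PasswordCallback) cb).setPassword(tokenPassword); // "setting userPassword"
                    } else if (cb instanceof RealmCallback) {
                        RealmCallback rc = (RealmCallback) cb;
                        rc.setText(rc.getDefaultText());                    // "setting realm: default"
                    }
                }
            };
            SaslClient client = Sasl.createSaslClient(
                    new String[] {"DIGEST-MD5"}, null,
                    "", "default",   // protocol "" and serverId "default", as in the NEGOTIATE message
                    null, handler);
            // DIGEST-MD5 is server-first: the NEGOTIATE challenge goes in,
            // the INITIATE token comes back out.
            return client.evaluateChallenge(serverChallenge);
        }
    }
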
2015-02-21 19:01:51,090 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:51,092 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=13, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=54784
2015-02-21 19:01:51,092 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:51,092 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock 58265 lastFlushOffset 55033
2015-02-21 19:01:51,092 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 13
2015-02-21 19:01:51,092 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 13
2015-02-21 19:01:51,092 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:13 offsetInBlock:54784 lastPacketInBlock:false lastByteOffsetInBlock: 58265
2015-02-21 19:01:51,095 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 13 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1956877
2015-02-21 19:01:51,095 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler TASK_FINISHED
2015-02-21 19:01:51,678 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:3 CompletedMaps:5 CompletedReds:2 ContAlloc:9 ContRel:0 HostLocal:2 RackLocal:3
2015-02-21 19:01:51,679 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #73
2015-02-21 19:01:51,680 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #73
2015-02-21 19:01:51,680 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms
2015-02-21 19:01:52,033 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #1
2015-02-21 19:01:52,033 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: getMapCompletionEvents(job_1424550134651_0002, 0, 10000, attempt_1424550134651_0002_r_000003_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#1 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:52,033 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:52,033 INFO [IPC Server handler 4 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: MapCompletionEvents request from attempt_1424550134651_0002_r_000003_0. startIndex 0 maxEvents 10000
2015-02-21 19:01:52,034 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: Served: getMapCompletionEvents queueTime= 0 procesingTime= 1
2015-02-21 19:01:52,034 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: responding to getMapCompletionEvents(job_1424550134651_0002, 0, 10000, attempt_1424550134651_0002_r_000003_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#1 Retry#0
2015-02-21 19:01:52,034 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: responding to getMapCompletionEvents(job_1424550134651_0002, 0, 10000, attempt_1424550134651_0002_r_000003_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#1 Retry#0 Wrote 553 bytes.
2015-02-21 19:01:52,042 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #162
2015-02-21 19:01:52,042 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#162 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:52,043 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:52,043 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 1 procesingTime= 0
2015-02-21 19:01:52,043 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#162 Retry#0
2015-02-21 19:01:52,043 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#162 Retry#0 Wrote 102 bytes.
2015-02-21 19:01:52,044 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #163
2015-02-21 19:01:52,044 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#163 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:52,045 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:52,045 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 1 procesingTime= 0
2015-02-21 19:01:52,045 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#163 Retry#0
2015-02-21 19:01:52,045 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#163 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:52,046 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #164
2015-02-21 19:01:52,046 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#164 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:52,046 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:52,047 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 1
2015-02-21 19:01:52,047 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#164 Retry#0
2015-02-21 19:01:52,047 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#164 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:52,210 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #2
2015-02-21 19:01:52,211 DEBUG [IPC Server handler 20 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 20 on 35954: statusUpdate(attempt_1424550134651_0002_r_000003_0, org.apache.hadoop.mapred.ReduceTaskStatus@5680ae64), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#2 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:52,211 DEBUG [IPC Server handler 20 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:52,211 INFO [IPC Server handler 20 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1424550134651_0002_r_000003_0 is : 0.0
2015-02-21 19:01:52,212 DEBUG [IPC Server handler 20 on 35954] org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime= 0 procesingTime= 1
2015-02-21 19:01:52,212 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.EventType: TA_UPDATE
2015-02-21 19:01:52,213 DEBUG [IPC Server handler 20 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 20 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000003_0, org.apache.hadoop.mapred.ReduceTaskStatus@5680ae64), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#2 Retry#0
2015-02-21 19:01:52,213 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000003_0 of type TA_UPDATE
2015-02-21 19:01:52,213 DEBUG [IPC Server handler 20 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 20 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000003_0, org.apache.hadoop.mapred.ReduceTaskStatus@5680ae64), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#2 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:52,213 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:52,681 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #74
2015-02-21 19:01:52,683 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #74
2015-02-21 19:01:52,683 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms
2015-02-21 19:01:52,683 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: headroom=
2015-02-21 19:01:52,683 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1424550134651_0002_01_000009
2015-02-21 19:01:52,683 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_CONTAINER_COMPLETED
2015-02-21 19:01:52,683 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:2 CompletedMaps:5 CompletedReds:2 ContAlloc:9 ContRel:0 HostLocal:2 RackLocal:3
2015-02-21 19:01:52,683 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000000_0 of type TA_CONTAINER_COMPLETED
2015-02-21 19:01:52,683 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptDiagnosticsUpdateEvent.EventType: TA_DIAGNOSTICS_UPDATE
2015-02-21 19:01:52,683 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000000_0 of type TA_DIAGNOSTICS_UPDATE
2015-02-21 19:01:52,683 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1424550134651_0002_r_000000_0: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

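That "Exit code is 143" diagnostic is expected on this path, not an error: the AM stops the container only after the attempt has already succeeded, and 143 is the usual 128 + signal-number convention for a process killed with SIGTERM. Quick sanity check of the arithmetic:

    // 143 = 128 + 15, i.e. the container process was SIGTERMed on request.
    class ExitCodeSketch {
        public static void main(String[] args) {
            int exitCode = 143;                                // from the diagnostics above
            int signal = exitCode > 128 ? exitCode - 128 : 0;
            System.out.println("killed by signal " + signal);  // prints 15 = SIGTERM
        }
    }
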
2015-02-21 19:01:52,823 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #3
2015-02-21 19:01:52,824 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: statusUpdate(attempt_1424550134651_0002_r_000003_0, org.apache.hadoop.mapred.ReduceTaskStatus@64d7dd22), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#3 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:52,824 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:52,824 INFO [IPC Server handler 8 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1424550134651_0002_r_000003_0 is : 0.0
2015-02-21 19:01:52,825 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime= 1 procesingTime= 1
2015-02-21 19:01:52,825 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.EventType: TA_UPDATE
2015-02-21 19:01:52,825 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000003_0, org.apache.hadoop.mapred.ReduceTaskStatus@64d7dd22), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#3 Retry#0
2015-02-21 19:01:52,825 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000003_0 of type TA_UPDATE
2015-02-21 19:01:52,825 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000003_0, org.apache.hadoop.mapred.ReduceTaskStatus@64d7dd22), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#3 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:52,825 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:53,048 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #165
2015-02-21 19:01:53,048 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#165 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:53,048 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:53,049 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 1 procesingTime= 0
2015-02-21 19:01:53,049 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#165 Retry#0
2015-02-21 19:01:53,049 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#165 Retry#0 Wrote 33 bytes.
2015-02-21 19:01:53,050 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #166
2015-02-21 19:01:53,050 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#166 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:53,050 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:53,050 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:53,051 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#166 Retry#0
2015-02-21 19:01:53,051 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#166 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:53,052 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #167
2015-02-21 19:01:53,052 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#167 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:53,052 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:53,052 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:53,052 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#167 Retry#0
2015-02-21 19:01:53,053 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#167 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:53,567 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #1
2015-02-21 19:01:53,567 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: getMapCompletionEvents(job_1424550134651_0002, 0, 10000, attempt_1424550134651_0002_r_000002_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#1 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:53,567 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:53,567 INFO [IPC Server handler 6 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: MapCompletionEvents request from attempt_1424550134651_0002_r_000002_0. startIndex 0 maxEvents 10000
2015-02-21 19:01:53,568 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: Served: getMapCompletionEvents queueTime= 0 procesingTime= 1
2015-02-21 19:01:53,568 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: responding to getMapCompletionEvents(job_1424550134651_0002, 0, 10000, attempt_1424550134651_0002_r_000002_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#1 Retry#0
2015-02-21 19:01:53,568 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: responding to getMapCompletionEvents(job_1424550134651_0002, 0, 10000, attempt_1424550134651_0002_r_000002_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#1 Retry#0 Wrote 553 bytes.
2015-02-21 19:01:53,671 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #9
2015-02-21 19:01:53,671 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: commitPending(attempt_1424550134651_0002_r_000003_0, org.apache.hadoop.mapred.ReduceTaskStatus@350fec29), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#9 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:53,671 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:53,672 INFO [IPC Server handler 8 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Commit-pending state update from attempt_1424550134651_0002_r_000003_0
2015-02-21 19:01:53,672 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.ipc.Server: Served: commitPending queueTime= 1 procesingTime= 0
2015-02-21 19:01:53,672 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_COMMIT_PENDING
2015-02-21 19:01:53,672 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: responding to commitPending(attempt_1424550134651_0002_r_000003_0, org.apache.hadoop.mapred.ReduceTaskStatus@350fec29), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#9 Retry#0
2015-02-21 19:01:53,672 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000003_0 of type TA_COMMIT_PENDING
2015-02-21 19:01:53,672 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: responding to commitPending(attempt_1424550134651_0002_r_000003_0, org.apache.hadoop.mapred.ReduceTaskStatus@350fec29), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#9 Retry#0 Wrote 118 bytes.
2015-02-21 19:01:53,672 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_r_000003_0 TaskAttempt Transitioned from RUNNING to COMMIT_PENDING
2015-02-21 19:01:53,672 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType: T_ATTEMPT_COMMIT_PENDING
2015-02-21 19:01:53,672 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_r_000003 of type T_ATTEMPT_COMMIT_PENDING
2015-02-21 19:01:53,673 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: attempt_1424550134651_0002_r_000003_0 given a go for committing the task output.
2015-02-21 19:01:53,680 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #10
2015-02-21 19:01:53,680 DEBUG [IPC Server handler 23 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 23 on 35954: canCommit(attempt_1424550134651_0002_r_000003_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#10 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:53,680 DEBUG [IPC Server handler 23 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:53,680 INFO [IPC Server handler 23 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Commit go/no-go request from attempt_1424550134651_0002_r_000003_0
2015-02-21 19:01:53,680 INFO [IPC Server handler 23 on 35954] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Result of canCommit for attempt_1424550134651_0002_r_000003_0:true
2015-02-21 19:01:53,680 DEBUG [IPC Server handler 23 on 35954] org.apache.hadoop.ipc.Server: Served: canCommit queueTime= 0 procesingTime= 0
2015-02-21 19:01:53,680 DEBUG [IPC Server handler 23 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 23 on 35954: responding to canCommit(attempt_1424550134651_0002_r_000003_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#10 Retry#0
2015-02-21 19:01:53,680 DEBUG [IPC Server handler 23 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 23 on 35954: responding to canCommit(attempt_1424550134651_0002_r_000003_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#10 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:53,684 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #75
2015-02-21 19:01:53,685 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #75
2015-02-21 19:01:53,685 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 1ms
2015-02-21 19:01:53,748 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #14
2015-02-21 19:01:53,749 DEBUG [IPC Server handler 11 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 11 on 35954: statusUpdate(attempt_1424550134651_0002_r_000003_0, org.apache.hadoop.mapred.ReduceTaskStatus@2e8c06be), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#14 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:53,749 DEBUG [IPC Server handler 11 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:53,749 INFO [IPC Server handler 11 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1424550134651_0002_r_000003_0 is : 0.7381348
2015-02-21 19:01:53,751 DEBUG [IPC Server handler 11 on 35954] org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime= 0 procesingTime= 2
2015-02-21 19:01:53,751 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.EventType: TA_UPDATE
2015-02-21 19:01:53,751 DEBUG [IPC Server handler 11 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 11 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000003_0, org.apache.hadoop.mapred.ReduceTaskStatus@2e8c06be), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#14 Retry#0
2015-02-21 19:01:53,751 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000003_0 of type TA_UPDATE
2015-02-21 19:01:53,751 DEBUG [IPC Server handler 11 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 11 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000003_0, org.apache.hadoop.mapred.ReduceTaskStatus@2e8c06be), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#14 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:53,751 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:53,752 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #15
2015-02-21 19:01:53,752 DEBUG [IPC Server handler 26 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 26 on 35954: done(attempt_1424550134651_0002_r_000003_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#15 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:53,752 DEBUG [IPC Server handler 26 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:53,752 INFO [IPC Server handler 26 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1424550134651_0002_r_000003_0
2015-02-21 19:01:53,752 DEBUG [IPC Server handler 26 on 35954] org.apache.hadoop.ipc.Server: Served: done queueTime= 0 procesingTime= 0
2015-02-21 19:01:53,753 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_DONE
2015-02-21 19:01:53,753 DEBUG [IPC Server handler 26 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 26 on 35954: responding to done(attempt_1424550134651_0002_r_000003_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#15 Retry#0
2015-02-21 19:01:53,753 DEBUG [IPC Server handler 26 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 26 on 35954: responding to done(attempt_1424550134651_0002_r_000003_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.252:43613 Call#15 Retry#0 Wrote 118 bytes.
2015-02-21 19:01:53,753 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000003_0 of type TA_DONE
2015-02-21 19:01:53,753 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_r_000003_0 TaskAttempt Transitioned from COMMIT_PENDING to SUCCESS_CONTAINER_CLEANUP
2015-02-21 19:01:53,753 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherEvent.EventType: CONTAINER_REMOTE_CLEANUP for container container_1424550134651_0002_01_000014 taskAttempt attempt_1424550134651_0002_r_000003_0
2015-02-21 19:01:53,753 INFO [ContainerLauncher #6] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_1424550134651_0002_01_000014 taskAttempt attempt_1424550134651_0002_r_000003_0
2015-02-21 19:01:53,753 INFO [ContainerLauncher #6] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1424550134651_0002_r_000003_0
2015-02-21 19:01:53,754 INFO [ContainerLauncher #6] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : hadoop3.rdpratti.com:8041
2015-02-21 19:01:53,754 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: disconnecting client 192.168.2.252:43613. Number of active connections: 1
Number of active connections: 1=0A= 2015-02-21 19:01:53,754 DEBUG [ContainerLauncher #6] = org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, = Service: 192.168.2.252:8041, Ident: = (org.apache.hadoop.yarn.security.NMTokenIdentifier@117a26fe)=0A= 2015-02-21 19:01:53,754 DEBUG [ContainerLauncher #6] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) = from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(Serve= rProxy.java:88)=0A= 2015-02-21 19:01:53,754 DEBUG [ContainerLauncher #6] = org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a = HadoopYarnProtoRpc proxy for protocol interface = org.apache.hadoop.yarn.api.ContainerManagementProtocol=0A= 2015-02-21 19:01:53,755 DEBUG [ContainerLauncher #6] = org.apache.hadoop.ipc.Client: getting client out of cache: = org.apache.hadoop.ipc.Client@27c8bfa4=0A= 2015-02-21 19:01:53,755 DEBUG [ContainerLauncher #6] = org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.=0A= 2015-02-21 19:01:53,755 DEBUG [ContainerLauncher #6] = org.apache.hadoop.ipc.Client: Connecting to = hadoop3.rdpratti.com/192.168.2.252:8041=0A= 2015-02-21 19:01:53,755 DEBUG [ContainerLauncher #6] = org.apache.hadoop.security.UserGroupInformation: PrivilegedAction = as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) = from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:7= 12)=0A= 2015-02-21 19:01:53,756 DEBUG [ContainerLauncher #6] = org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: = NEGOTIATE=0A= =0A= 2015-02-21 19:01:53,757 DEBUG [ContainerLauncher #6] = org.apache.hadoop.security.SaslRpcClient: Received SASL message state: = NEGOTIATE=0A= auths {=0A= method: "TOKEN"=0A= mechanism: "DIGEST-MD5"=0A= protocol: ""=0A= serverId: "default"=0A= challenge: = "realm=3D\"default\",nonce=3D\"wm1kf39zCXLUvIRhr5IKGXVuvEHmH68IQjAoVK5j\"= ,qop=3D\"auth\",charset=3Dutf-8,algorithm=3Dmd5-sess"=0A= }=0A= =0A= 2015-02-21 19:01:53,757 DEBUG [ContainerLauncher #6] = org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface = org.apache.hadoop.yarn.api.ContainerManagementProtocolPB = info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@29d50= d84=0A= 2015-02-21 19:01:53,757 INFO [ContainerLauncher #6] = org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: = 192.168.2.252:8041. 
Current token is Kind: NMToken, Service: = 192.168.2.252:8041, Ident: = (org.apache.hadoop.yarn.security.NMTokenIdentifier@626a6a90)=0A= 2015-02-21 19:01:53,757 DEBUG [ContainerLauncher #6] = org.apache.hadoop.security.SaslRpcClient: Creating SASL = DIGEST-MD5(TOKEN) client to authenticate to service at default=0A= 2015-02-21 19:01:53,757 DEBUG [ContainerLauncher #6] = org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for = protocol ContainerManagementProtocolPB=0A= 2015-02-21 19:01:53,757 DEBUG [ContainerLauncher #6] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = username: = AAABS63OA3sAAAACAAAAAQAZaGFkb29wMy5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB0= 0Yo=0A= 2015-02-21 19:01:53,757 DEBUG [ContainerLauncher #6] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = userPassword=0A= 2015-02-21 19:01:53,757 DEBUG [ContainerLauncher #6] = org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting = realm: default=0A= 2015-02-21 19:01:53,758 DEBUG [ContainerLauncher #6] = org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: = INITIATE=0A= token: = "charset=3Dutf-8,username=3D\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMy5yZHByYXR= 0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=3D\"default\",nonce=3D\"wm1kf39= zCXLUvIRhr5IKGXVuvEHmH68IQjAoVK5j\",nc=3D00000001,cnonce=3D\"TEq4B75APYw9= Uz4Jsoj008rkZ6UPLVhKOcj8KXdv\",digest-uri=3D\"/default\",maxbuf=3D65536,r= esponse=3D358ca9f74244b84a4c91f6502ff88782,qop=3Dauth"=0A= auths {=0A= method: "TOKEN"=0A= mechanism: "DIGEST-MD5"=0A= protocol: ""=0A= serverId: "default"=0A= }=0A= =0A= 2015-02-21 19:01:53,759 DEBUG [ContainerLauncher #6] = org.apache.hadoop.security.SaslRpcClient: Received SASL message state: = SUCCESS=0A= token: "rspauth=3D5740fb27bb662409542739ee562b5e69"=0A= =0A= 2015-02-21 19:01:53,760 DEBUG [ContainerLauncher #6] = org.apache.hadoop.ipc.Client: Negotiated QOP is :auth=0A= 2015-02-21 19:01:53,760 DEBUG [IPC Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001: starting, having connections 3=0A= 2015-02-21 19:01:53,760 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001 sending #76=0A= 2015-02-21 19:01:53,765 DEBUG [IPC Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001 got value #76=0A= 2015-02-21 19:01:53,765 DEBUG [ContainerLauncher #6] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: stopContainers took 10ms=0A= 2015-02-21 19:01:53,765 DEBUG [IPC Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001: closed=0A= 2015-02-21 19:01:53,765 DEBUG [IPC Client (1541092382) connection to = hadoop3.rdpratti.com/192.168.2.252:8041 from = appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC = Client (1541092382) connection to = 
hadoop3.rdpratti.com/192.168.2.252:8041 from appattempt_1424550134651_0002_000001: stopped, remaining connections 2
2015-02-21 19:01:53,767 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_CONTAINER_CLEANED
2015-02-21 19:01:53,767 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000003_0 of type TA_CONTAINER_CLEANED
2015-02-21 19:01:53,767 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_r_000003_0 TaskAttempt Transitioned from SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
2015-02-21 19:01:53,767 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:53,768 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:53,768 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: REDUCE_ATTEMPT_FINISHED
2015-02-21 19:01:53,768 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType: T_ATTEMPT_SUCCEEDED
2015-02-21 19:01:53,768 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:53,768 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_r_000003 of type T_ATTEMPT_SUCCEEDED
2015-02-21 19:01:53,768 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1424550134651_0002_r_000003_0
2015-02-21 19:01:53,768 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1424550134651_0002_r_000003 Task Transitioned from RUNNING to SUCCEEDED
2015-02-21 19:01:53,768 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:53,768 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskAttemptCompletedEvent.EventType: JOB_TASK_ATTEMPT_COMPLETED
2015-02-21 19:01:53,768 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_TASK_ATTEMPT_COMPLETED
2015-02-21 19:01:53,768 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskEvent.EventType: JOB_TASK_COMPLETED
2015-02-21 19:01:53,768 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_TASK_COMPLETED
2015-02-21 19:01:53,768 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 8
2015-02-21 19:01:53,768 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: TASK_FINISHED
2015-02-21 19:01:53,771 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=14, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=57856
2015-02-21 19:01:53,771 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:53,771 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock 61971 lastFlushOffset 58265
2015-02-21 19:01:53,771 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 14
2015-02-21 19:01:53,771 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 14
2015-02-21 19:01:53,771 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:14 offsetInBlock:57856 lastPacketInBlock:false lastByteOffsetInBlock: 61971
2015-02-21 19:01:53,773 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 14 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1700308
2015-02-21 19:01:53,773 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler REDUCE_ATTEMPT_FINISHED
2015-02-21 19:01:53,773 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:53,776 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=15, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=61952
2015-02-21 19:01:53,776 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:53,776 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock 65201 lastFlushOffset 61971
2015-02-21 19:01:53,776 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 15
2015-02-21 19:01:53,776 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 15
2015-02-21 19:01:53,776 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:15 offsetInBlock:61952 lastPacketInBlock:false lastByteOffsetInBlock: 65201
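
(Aside for anyone reading along: the writeChunk / Queued packet / Waiting for ack lines above are the AM flushing the .jhist job-history file after each event. The flush pattern against the standard FileSystem API amounts to roughly this sketch; illustrative only, the path is made up:)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class JhistFlushSketch {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // Hypothetical history file; the real one lives under .staging/<job id>/
            try (FSDataOutputStream out = fs.create(new Path("/tmp/sketch.jhist"))) {
                out.writeBytes("TASK_FINISHED\n"); // one serialized history event
                out.hflush(); // queue the packet and wait for pipeline acks (the lines above)
            }
        }
    }
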
2015-02-21 19:01:53,778 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 15 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1557928
2015-02-21 19:01:53,778 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler TASK_FINISHED
2015-02-21 19:01:53,949 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #2
2015-02-21 19:01:53,950 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: statusUpdate(attempt_1424550134651_0002_r_000002_0, org.apache.hadoop.mapred.ReduceTaskStatus@53e95809), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#2 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:53,950 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:53,950 INFO [IPC Server handler 4 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1424550134651_0002_r_000002_0 is : 0.0
2015-02-21 19:01:53,950 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime= 1 procesingTime= 0
2015-02-21 19:01:53,950 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.EventType: TA_UPDATE
2015-02-21 19:01:53,951 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000002_0, org.apache.hadoop.mapred.ReduceTaskStatus@53e95809), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#2 Retry#0
2015-02-21 19:01:53,951 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000002_0, org.apache.hadoop.mapred.ReduceTaskStatus@53e95809), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#2 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:53,951 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000002_0 of type TA_UPDATE
2015-02-21 19:01:53,952 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:54,054 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #168
2015-02-21 19:01:54,054 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#168 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:54,054 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:54,055 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 0 procesingTime= 1
2015-02-21 19:01:54,055 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#168 Retry#0
2015-02-21 19:01:54,055 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#168 Retry#0 Wrote 102 bytes.
2015-02-21 19:01:54,056 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #169
2015-02-21 19:01:54,056 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#169 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:54,056 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:54,056 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:54,056 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#169 Retry#0
2015-02-21 19:01:54,056 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#169 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:54,057 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #170
2015-02-21 19:01:54,057 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#170 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:54,058 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:54,058 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 1 procesingTime= 0
2015-02-21 19:01:54,058 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#170 Retry#0
2015-02-21 19:01:54,058 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#170 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:54,515 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: REQUEST /ws/v1/mapreduce/jobs/job_1424550134651_0002 on org.mortbay.jetty.HttpConnection@1cc6e47a
2015-02-21 19:01:54,515 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: sessionManager=org.mortbay.jetty.servlet.HashSessionManager@4befbfaf
2015-02-21 19:01:54,515 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: session=null
2015-02-21 19:01:54,515 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: servlet=default
2015-02-21 19:01:54,515 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: chain=NoCacheFilter->safety->AM_PROXY_FILTER->guice->default
2015-02-21 19:01:54,515 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: servlet holder=default
2015-02-21 19:01:54,515 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: call filter NoCacheFilter
2015-02-21 19:01:54,516 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: call filter safety
2015-02-21 19:01:54,516 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: call filter AM_PROXY_FILTER
2015-02-21 19:01:54,516 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter: Remote address for request is: 192.168.2.253
2015-02-21 19:01:54,516 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter: proxy address is: 192.168.2.253
2015-02-21 19:01:54,516 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: call filter guice
2015-02-21 19:01:54,519 DEBUG [970736822@qtp-700266387-0 - /ws/v1/mapreduce/jobs/job_1424550134651_0002] org.mortbay.log: RESPONSE /ws/v1/mapreduce/jobs/job_1424550134651_0002 200
2015-02-21 19:01:54,519 DEBUG [970736822@qtp-700266387-0] org.mortbay.log: EOF
2015-02-21 19:01:54,590 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #3
2015-02-21 19:01:54,590 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: statusUpdate(attempt_1424550134651_0002_r_000002_0, org.apache.hadoop.mapred.ReduceTaskStatus@7c0a1aa1), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#3 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:54,590 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:54,591 INFO [IPC Server handler 8 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1424550134651_0002_r_000002_0 is : 0.0
2015-02-21 19:01:54,591 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime= 1 procesingTime= 0
2015-02-21 19:01:54,591 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.EventType: TA_UPDATE
2015-02-21 19:01:54,592 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000002_0, org.apache.hadoop.mapred.ReduceTaskStatus@7c0a1aa1), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#3 Retry#0
2015-02-21 19:01:54,592 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000002_0 of type TA_UPDATE
2015-02-21 19:01:54,592 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000002_0, org.apache.hadoop.mapred.ReduceTaskStatus@7c0a1aa1), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#3 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:54,592 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:54,685 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:2 CompletedMaps:5 CompletedReds:3 ContAlloc:9 ContRel:0 HostLocal:2 RackLocal:3
2015-02-21 19:01:54,686 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #77
2015-02-21 19:01:54,688 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #77
2015-02-21 19:01:54,688 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms
2015-02-21 19:01:55,060 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #171
2015-02-21 19:01:55,060 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#171 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:55,060 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:55,060 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 0 procesingTime= 0
2015-02-21 19:01:55,060 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#171 Retry#0
2015-02-21 19:01:55,060 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#171 Retry#0 Wrote 33 bytes.
2015-02-21 19:01:55,061 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #172
2015-02-21 19:01:55,061 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#172 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:55,061 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:55,062 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 1
2015-02-21 19:01:55,062 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#172 Retry#0
2015-02-21 19:01:55,062 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#172 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:55,063 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #173
2015-02-21 19:01:55,063 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#173 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:55,063 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:55,063 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:55,063 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#173 Retry#0
2015-02-21 19:01:55,063 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#173 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:55,689 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #78
2015-02-21 19:01:55,690 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #78
2015-02-21 19:01:55,691 DEBUG [RMCommunicator Allocator] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms
2015-02-21 19:01:55,691 DEBUG [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: headroom=
2015-02-21 19:01:55,691 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1424550134651_0002_01_000014
2015-02-21 19:01:55,691 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_CONTAINER_COMPLETED
2015-02-21 19:01:55,691 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:1 CompletedMaps:5 CompletedReds:3 ContAlloc:9 ContRel:0 HostLocal:2 RackLocal:3
2015-02-21 19:01:55,691 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000003_0 of type TA_CONTAINER_COMPLETED
2015-02-21 19:01:55,691 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptDiagnosticsUpdateEvent.EventType: TA_DIAGNOSTICS_UPDATE
2015-02-21 19:01:55,691 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000003_0 of type TA_DIAGNOSTICS_UPDATE
2015-02-21 19:01:55,691 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1424550134651_0002_r_000003_0: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2015-02-21 19:01:55,944 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #9
2015-02-21 19:01:55,944 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: commitPending(attempt_1424550134651_0002_r_000002_0, org.apache.hadoop.mapred.ReduceTaskStatus@5188dc4c), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#9 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:55,944 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:55,944 INFO [IPC Server handler 4 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Commit-pending state update from attempt_1424550134651_0002_r_000002_0
2015-02-21 19:01:55,945 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: Served: commitPending queueTime= 0 procesingTime= 1
2015-02-21 19:01:55,945 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_COMMIT_PENDING
2015-02-21 19:01:55,945 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000002_0 of type TA_COMMIT_PENDING
2015-02-21 19:01:55,945 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: responding to commitPending(attempt_1424550134651_0002_r_000002_0, org.apache.hadoop.mapred.ReduceTaskStatus@5188dc4c), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#9 Retry#0
2015-02-21 19:01:55,945 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_r_000002_0 TaskAttempt Transitioned from RUNNING to COMMIT_PENDING
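
(A quick note on those kill diagnostics, since they look alarming: exit code 143 is 128 + 15, i.e. SIGTERM, the AM tearing down the container of a reducer that already reported success, so those lines are expected on a healthy run.)
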
2015-02-21 19:01:55,945 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType: T_ATTEMPT_COMMIT_PENDING
2015-02-21 19:01:55,945 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: responding to commitPending(attempt_1424550134651_0002_r_000002_0, org.apache.hadoop.mapred.ReduceTaskStatus@5188dc4c), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#9 Retry#0 Wrote 118 bytes.
2015-02-21 19:01:55,945 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_r_000002 of type T_ATTEMPT_COMMIT_PENDING
2015-02-21 19:01:55,945 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: attempt_1424550134651_0002_r_000002_0 given a go for committing the task output.
2015-02-21 19:01:55,947 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #10
2015-02-21 19:01:55,947 DEBUG [IPC Server handler 29 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 29 on 35954: canCommit(attempt_1424550134651_0002_r_000002_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#10 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:55,947 DEBUG [IPC Server handler 29 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:55,947 INFO [IPC Server handler 29 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Commit go/no-go request from attempt_1424550134651_0002_r_000002_0
2015-02-21 19:01:55,948 INFO [IPC Server handler 29 on 35954] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Result of canCommit for attempt_1424550134651_0002_r_000002_0:true
2015-02-21 19:01:55,948 DEBUG [IPC Server handler 29 on 35954] org.apache.hadoop.ipc.Server: Served: canCommit queueTime= 0 procesingTime= 1
2015-02-21 19:01:55,948 DEBUG [IPC Server handler 29 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 29 on 35954: responding to canCommit(attempt_1424550134651_0002_r_000002_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#10 Retry#0
2015-02-21 19:01:55,948 DEBUG [IPC Server handler 29 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 29 on 35954: responding to canCommit(attempt_1424550134651_0002_r_000002_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#10 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:56,060 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #14
2015-02-21 19:01:56,061 DEBUG [IPC Server handler 28 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 28 on 35954: statusUpdate(attempt_1424550134651_0002_r_000002_0, org.apache.hadoop.mapred.ReduceTaskStatus@4df18a3), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#14 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:56,061 DEBUG [IPC Server handler 28 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:56,061 INFO [IPC Server handler 28 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1424550134651_0002_r_000002_0 is : 0.73664665
2015-02-21 19:01:56,063 DEBUG [IPC Server handler 28 on 35954] org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime= 0 procesingTime= 2
2015-02-21 19:01:56,063 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.EventType: TA_UPDATE
2015-02-21 19:01:56,063 DEBUG [IPC Server handler 28 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 28 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000002_0, org.apache.hadoop.mapred.ReduceTaskStatus@4df18a3), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#14 Retry#0
2015-02-21 19:01:56,063 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000002_0 of type TA_UPDATE
2015-02-21 19:01:56,063 DEBUG [IPC Server handler 28 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 28 on 35954: responding to statusUpdate(attempt_1424550134651_0002_r_000002_0, org.apache.hadoop.mapred.ReduceTaskStatus@4df18a3), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#14 Retry#0 Wrote 41 bytes.
2015-02-21 19:01:56,063 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:56,065 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: got #15
2015-02-21 19:01:56,065 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #174
2015-02-21 19:01:56,065 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#174 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:56,065 DEBUG [IPC Server handler 20 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 20 on 35954: done(attempt_1424550134651_0002_r_000002_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#15 Retry#0 for RpcKind RPC_WRITABLE
2015-02-21 19:01:56,065 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:56,065 DEBUG [IPC Server handler 20 on 35954] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:job_1424550134651_0002 (auth:TOKEN) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:56,065 INFO [IPC Server handler 20 on 35954] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1424550134651_0002_r_000002_0
2015-02-21 19:01:56,065 DEBUG [IPC Server handler 20 on 35954] org.apache.hadoop.ipc.Server: Served: done queueTime= 0 procesingTime= 0
2015-02-21 19:01:56,065 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 0 procesingTime= 0
2015-02-21 19:01:56,065 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#174 Retry#0
2015-02-21 19:01:56,065 DEBUG [IPC Server handler 20 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 20 on 35954: responding to done(attempt_1424550134651_0002_r_000002_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#15 Retry#0
2015-02-21 19:01:56,065 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#174 Retry#0 Wrote 33 bytes.
2015-02-21 19:01:56,065 DEBUG [IPC Server handler 20 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 20 on 35954: responding to done(attempt_1424550134651_0002_r_000002_0), rpc version=2, client version=19, methodsFingerPrint=937413979 from 192.168.2.250:34290 Call#15 Retry#0 Wrote 118 bytes.
2015-02-21 19:01:56,066 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_DONE
2015-02-21 19:01:56,066 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000002_0 of type TA_DONE
2015-02-21 19:01:56,066 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_r_000002_0 TaskAttempt Transitioned from COMMIT_PENDING to SUCCESS_CONTAINER_CLEANUP
2015-02-21 19:01:56,066 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherEvent.EventType: CONTAINER_REMOTE_CLEANUP for container container_1424550134651_0002_01_000011 taskAttempt attempt_1424550134651_0002_r_000002_0
2015-02-21 19:01:56,066 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #175
2015-02-21 19:01:56,067 INFO [ContainerLauncher #7] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_1424550134651_0002_01_000011 taskAttempt attempt_1424550134651_0002_r_000002_0
2015-02-21 19:01:56,067 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#175 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:56,067 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:56,067 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 1 procesingTime= 0
2015-02-21 19:01:56,067 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#175 Retry#0
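
(The commitPending -> canCommit -> done exchange just above is the task-commit handshake between the reducer and the AM. This is not the Hadoop source, but the gate logic amounts to something like the following sketch, with made-up names:)

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    /** Illustrative only: the go/no-go gate behind commitPending/canCommit. */
    public class CommitGateSketch {
        // taskId -> the attempt that has been granted the commit
        private final Map<String, String> commitGranted = new ConcurrentHashMap<>();

        /** Attempt reports its output is ready (the commitPending RPC). */
        void commitPending(String taskId, String attemptId) {
            commitGranted.putIfAbsent(taskId, attemptId); // first attempt to ask wins
        }

        /** Attempt polls for permission (the canCommit RPC): true only for the winner. */
        boolean canCommit(String taskId, String attemptId) {
            return attemptId.equals(commitGranted.get(taskId));
        }

        public static void main(String[] args) {
            CommitGateSketch gate = new CommitGateSketch();
            gate.commitPending("task_r_000002", "attempt_r_000002_0");
            System.out.println(gate.canCommit("task_r_000002", "attempt_r_000002_0")); // true
            System.out.println(gate.canCommit("task_r_000002", "attempt_r_000002_1")); // false
        }
    }

(Only after canCommit returns true does the attempt write its final output and report done, which is why speculative duplicates can never both commit.)
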
2015-02-21 19:01:56,067 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#175 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:56,068 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #176
2015-02-21 19:01:56,069 INFO [ContainerLauncher #7] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1424550134651_0002_r_000002_0
2015-02-21 19:01:56,069 DEBUG [Socket Reader #1 for port 35954] org.apache.hadoop.ipc.Server: Socket Reader #1 for port 35954: disconnecting client 192.168.2.250:34290. Number of active connections: 0
2015-02-21 19:01:56,069 INFO [ContainerLauncher #7] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : hadoop1.rdpratti.com:8041
2015-02-21 19:01:56,069 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SecurityUtil: Acquired token Kind: NMToken, Service: 192.168.2.250:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@5f55253e)
2015-02-21 19:01:56,070 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.yarn.client.ServerProxy.createRetriableProxy(ServerProxy.java:88)
2015-02-21 19:01:56,070 DEBUG [ContainerLauncher #7] org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ContainerManagementProtocol
2015-02-21 19:01:56,070 DEBUG [ContainerLauncher #7] org.apache.hadoop.ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@27c8bfa4
2015-02-21 19:01:56,070 DEBUG [ContainerLauncher #7] org.apache.hadoop.ipc.Client: The ping interval is 60000 ms.
2015-02-21 19:01:56,071 DEBUG [ContainerLauncher #7] org.apache.hadoop.ipc.Client: Connecting to hadoop1.rdpratti.com/192.168.2.250:8041
2015-02-21 19:01:56,071 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#176 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:56,071 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:56,071 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:appattempt_1424550134651_0002_000001 (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
2015-02-21 19:01:56,071 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 2 procesingTime= 0
2015-02-21 19:01:56,072 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 19:01:56,072 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#176 Retry#0
2015-02-21 19:01:56,072 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#176 Retry#0 Wrote 281 bytes.
2015-02-21 19:01:56,073 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"5IKHDJ+cIM8L1TWOFB0V+YcEVOY9NMTn4yap3bXQ\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}

2015-02-21 19:01:56,073 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB info:org.apache.hadoop.yarn.security.ContainerManagerSecurityInfo$1@388f9f85
2015-02-21 19:01:56,073 INFO [ContainerLauncher #7] org.apache.hadoop.yarn.security.NMTokenSelector: Looking for service: 192.168.2.250:8041. Current token is Kind: NMToken, Service: 192.168.2.250:8041, Ident: (org.apache.hadoop.yarn.security.NMTokenIdentifier@5d31f1c5)
2015-02-21 19:01:56,073 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2015-02-21 19:01:56,073 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for protocol ContainerManagementProtocolPB
2015-02-21 19:01:56,073 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting username: AAABS63OA3sAAAACAAAAAQAZaGFkb29wMS5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo
2015-02-21 19:01:56,073 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting userPassword
2015-02-21 19:01:56,073 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting realm: default
2015-02-21 19:01:56,074 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
token: "charset=utf-8,username=\"AAABS63OA3sAAAACAAAAAQAZaGFkb29wMS5yZHByYXR0aS5jb206ODA0MQAIY2xvdWRlcmFB00Yo\",realm=\"default\",nonce=\"5IKHDJ+cIM8L1TWOFB0V+YcEVOY9NMTn4yap3bXQ\",nc=00000001,cnonce=\"3PgX1blybdZJGfUSB0O+o2r7oUIvfwCjWm1jD2M8\",digest-uri=\"/default\",maxbuf=65536,response=be225e76988dc1e3703a583d67c3e89c,qop=auth"
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
}

2015-02-21 19:01:56,077 DEBUG [ContainerLauncher #7] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: SUCCESS
token: "rspauth=53d54dcb30e1611451e251b2bead9299"

2015-02-21 19:01:56,077 DEBUG [ContainerLauncher #7] org.apache.hadoop.ipc.Client: Negotiated QOP is :auth
2015-02-21 19:01:56,077 DEBUG [IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001: starting, having connections 3
2015-02-21 19:01:56,078 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001 sending #79
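
(Side note on that DIGEST-MD5 exchange: the NMToken identifier/password pair is fed to a stock javax.security.sasl client. Roughly like this sketch; illustrative only, the token strings are placeholders and error handling is omitted:)

    import javax.security.auth.callback.Callback;
    import javax.security.auth.callback.CallbackHandler;
    import javax.security.auth.callback.NameCallback;
    import javax.security.auth.callback.PasswordCallback;
    import javax.security.sasl.RealmCallback;
    import javax.security.sasl.Sasl;
    import javax.security.sasl.SaslClient;

    public class NmTokenSaslSketch {
        public static void main(String[] args) throws Exception {
            CallbackHandler cbh = callbacks -> {
                for (Callback cb : callbacks) {
                    if (cb instanceof NameCallback) {
                        // base64 token identifier goes in as the username (placeholder here)
                        ((NameCallback) cb).setName("AAABS63O...placeholder");
                    } else if (cb instanceof PasswordCallback) {
                        ((PasswordCallback) cb).setPassword("token-secret".toCharArray());
                    } else if (cb instanceof RealmCallback) {
                        RealmCallback rc = (RealmCallback) cb;
                        rc.setText(rc.getDefaultText()); // "default" in the log above
                    }
                }
            };
            // protocol "" and serverId "default", matching the NEGOTIATE message
            SaslClient client = Sasl.createSaslClient(
                    new String[] {"DIGEST-MD5"}, null, "", "default", null, cbh);
            // DIGEST-MD5 is server-first: the INITIATE message answers the challenge
            byte[] challenge = args.length > 0 ? args[0].getBytes("UTF-8") : null;
            if (challenge != null) {
                byte[] initiate = client.evaluateChallenge(challenge);
                System.out.println("INITIATE bytes: " + initiate.length);
            }
        }
    }
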
2015-02-21 19:01:56,085 DEBUG [IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001 got value #79
2015-02-21 19:01:56,085 DEBUG [ContainerLauncher #7] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: stopContainers took 15ms
2015-02-21 19:01:56,085 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent.EventType: TA_CONTAINER_CLEANED
2015-02-21 19:01:56,085 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Processing attempt_1424550134651_0002_r_000002_0 of type TA_CONTAINER_CLEANED
2015-02-21 19:01:56,086 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1424550134651_0002_r_000002_0 TaskAttempt Transitioned from SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
2015-02-21 19:01:56,086 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCounterUpdateEvent.EventType: JOB_COUNTER_UPDATE
2015-02-21 19:01:56,086 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COUNTER_UPDATE
2015-02-21 19:01:56,086 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: REDUCE_ATTEMPT_FINISHED
2015-02-21 19:01:56,086 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent.EventType: T_ATTEMPT_SUCCEEDED
2015-02-21 19:01:56,086 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:56,086 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Processing task_1424550134651_0002_r_000002 of type T_ATTEMPT_SUCCEEDED
2015-02-21 19:01:56,086 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1424550134651_0002_r_000002_0
2015-02-21 19:01:56,086 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1424550134651_0002_r_000002 Task Transitioned from RUNNING to SUCCEEDED
2015-02-21 19:01:56,086 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.speculate.SpeculatorEvent.EventType: ATTEMPT_STATUS_UPDATE
2015-02-21 19:01:56,086 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskAttemptCompletedEvent.EventType: JOB_TASK_ATTEMPT_COMPLETED
2015-02-21 19:01:56,086 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_TASK_ATTEMPT_COMPLETED
2015-02-21 19:01:56,086 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobTaskEvent.EventType: JOB_TASK_COMPLETED
2015-02-21 19:01:56,086 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_TASK_COMPLETED
2015-02-21 19:01:56,086 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 9
2015-02-21 19:01:56,087 DEBUG [IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001: closed
2015-02-21 19:01:56,087 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1424550134651_0002Job Transitioned from RUNNING to COMMITTING
2015-02-21 19:01:56,087 DEBUG [IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop1.rdpratti.com/192.168.2.250:8041 from appattempt_1424550134651_0002_000001: stopped, remaining connections 2
2015-02-21 19:01:56,087 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: TASK_FINISHED
2015-02-21 19:01:56,088 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.commit.CommitterJobCommitEvent.EventType: JOB_COMMIT
2015-02-21 19:01:56,089 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=16, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=65024
2015-02-21 19:01:56,089 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:56,089 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock 68908 lastFlushOffset 65201
2015-02-21 19:01:56,089 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 16
2015-02-21 19:01:56,089 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 16
2015-02-21 19:01:56,089 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:16 offsetInBlock:65024 lastPacketInBlock:false lastByteOffsetInBlock: 68908
2015-02-21 19:01:56,090 INFO [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_COMMIT
2015-02-21 19:01:56,090 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.hdfs.DFSClient: /user/cloudera/.staging/job_1424550134651_0002/COMMIT_STARTED: masked=rw-r--r--
2015-02-21 19:01:56,091 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #80
2015-02-21 19:01:56,093 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 16 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1931718
2015-02-21 19:01:56,093 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler REDUCE_ATTEMPT_FINISHED
2015-02-21 19:01:56,093 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:56,095 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=17, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=68608
2015-02-21 19:01:56,095 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:56,096 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock 72138 lastFlushOffset 68908
2015-02-21 19:01:56,096 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 17
2015-02-21 19:01:56,096 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:17 offsetInBlock:68608 lastPacketInBlock:false lastByteOffsetInBlock: 72138
2015-02-21 19:01:56,096 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 17
2015-02-21 19:01:56,098 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #80
2015-02-21 19:01:56,098 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 17 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1471309
2015-02-21 19:01:56,098 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: create took 8ms
2015-02-21 19:01:56,098 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler TASK_FINISHED
2015-02-21 19:01:56,098 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.hdfs.DFSClient: computePacketChunkSize: src=/user/cloudera/.staging/job_1424550134651_0002/COMMIT_STARTED, chunkSize=516, chunksPerPacket=127, packetSize=65532
2015-02-21 19:01:56,102 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: -1
2015-02-21 19:01:56,107 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #81
2015-02-21 19:01:56,120 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #81
2015-02-21 19:01:56,120 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: complete took 13ms
2015-02-21 19:01:56,123 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #82
2015-02-21 19:01:56,125 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #82
2015-02-21 19:01:56,125 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getListing took 2ms
2015-02-21 19:01:56,130 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Merging data from FileStatus{path=hdfs://hadoop0.rdpratti.com:8020/user/cloudera/wordlengths4/_temporary/1/task_1424550134651_0002_r_000000; isDirectory=true; modification_time=1424563309533; access_time=0; owner=cloudera; group=cloudera; permission=rwxr-xr-x; isSymlink=false} to hdfs://hadoop0.rdpratti.com:8020/user/cloudera/wordlengths4
2015-02-21 19:01:56,131 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #83
2015-02-21 19:01:56,132 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #83
2015-02-21 19:01:56,132 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 19:01:56,132 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #84
2015-02-21 19:01:56,133 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #84
2015-02-21 19:01:56,133 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 19:01:56,133 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #85
2015-02-21 19:01:56,134 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #85
2015-02-21 19:01:56,134 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getListing took 1ms
2015-02-21 19:01:56,135 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Merging data from FileStatus{path=hdfs://hadoop0.rdpratti.com:8020/user/cloudera/wordlengths4/_temporary/1/task_1424550134651_0002_r_000000/part-r-00000; isDirectory=false; length=175; replication=3; blocksize=134217728; modification_time=1424563310752; access_time=1424563309533; owner=cloudera; group=cloudera; permission=rw-r--r--; isSymlink=false} to hdfs://hadoop0.rdpratti.com:8020/user/cloudera/wordlengths4/part-r-00000
2015-02-21 19:01:56,135 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #86
2015-02-21 19:01:56,136 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #86
2015-02-21 19:01:56,136 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 19:01:56,138 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #87
2015-02-21 19:01:56,142 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #87
2015-02-21 19:01:56,142 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: rename took 4ms
2015-02-21 19:01:56,145 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Merging data from FileStatus{path=hdfs://hadoop0.rdpratti.com:8020/user/cloudera/wordlengths4/_temporary/1/task_1424550134651_0002_r_000001; isDirectory=true; modification_time=1424563306249; access_time=0; owner=cloudera; group=cloudera; permission=rwxr-xr-x; isSymlink=false} to hdfs://hadoop0.rdpratti.com:8020/user/cloudera/wordlengths4
2015-02-21 19:01:56,145 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #88
2015-02-21 19:01:56,146 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #88
2015-02-21 19:01:56,146 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 19:01:56,146 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #89
2015-02-21 19:01:56,147 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #89
2015-02-21 19:01:56,147 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 19:01:56,148 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #90
2015-02-21 19:01:56,148 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #90
2015-02-21 19:01:56,148 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getListing took 1ms
2015-02-21 19:01:56,149 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Merging data from FileStatus{path=hdfs://hadoop0.rdpratti.com:8020/user/cloudera/wordlengths4/_temporary/1/task_1424550134651_0002_r_000001/part-r-00001; isDirectory=false; length=167; replication=3; blocksize=134217728; modification_time=1424563306847; access_time=1424563306249; owner=cloudera; group=cloudera; permission=rw-r--r--; isSymlink=false} to hdfs://hadoop0.rdpratti.com:8020/user/cloudera/wordlengths4/part-r-00001
2015-02-21 19:01:56,149 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #91
2015-02-21 19:01:56,150 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #91
2015-02-21 19:01:56,150 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 19:01:56,150 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #92
2015-02-21 19:01:56,164 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #92
2015-02-21 19:01:56,164 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: rename took 14ms
2015-02-21 19:01:56,164 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Merging data from FileStatus{path=hdfs://hadoop0.rdpratti.com:8020/user/cloudera/wordlengths4/_temporary/1/task_1424550134651_0002_r_000002; isDirectory=true; modification_time=1424563314721; access_time=0; owner=cloudera; group=cloudera; permission=rwxr-xr-x; isSymlink=false} to hdfs://hadoop0.rdpratti.com:8020/user/cloudera/wordlengths4
2015-02-21 19:01:56,165 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #93
2015-02-21 19:01:56,165 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #93
2015-02-21 19:01:56,166 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 19:01:56,166 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #94
IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #94=0A= 2015-02-21 19:01:56,167 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #94=0A= 2015-02-21 19:01:56,167 DEBUG [CommitterEvent Processor #1] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms=0A= 2015-02-21 19:01:56,167 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #95=0A= 2015-02-21 19:01:56,168 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #95=0A= 2015-02-21 19:01:56,168 DEBUG [CommitterEvent Processor #1] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getListing took 1ms=0A= 2015-02-21 19:01:56,168 DEBUG [CommitterEvent Processor #1] = org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Merging data = from = FileStatus{path=3Dhdfs://hadoop0.rdpratti.com:8020/user/cloudera/wordleng= ths4/_temporary/1/task_1424550134651_0002_r_000002/part-r-00002; = isDirectory=3Dfalse; length=3D132; replication=3D3; = blocksize=3D134217728; modification_time=3D1424563315803; = access_time=3D1424563314721; owner=3Dcloudera; group=3Dcloudera; = permission=3Drw-r--r--; isSymlink=3Dfalse} to = hdfs://hadoop0.rdpratti.com:8020/user/cloudera/wordlengths4/part-r-00002=0A= 2015-02-21 19:01:56,168 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #96=0A= 2015-02-21 19:01:56,169 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #96=0A= 2015-02-21 19:01:56,169 DEBUG [CommitterEvent Processor #1] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms=0A= 2015-02-21 19:01:56,170 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #97=0A= 2015-02-21 19:01:56,175 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #97=0A= 2015-02-21 19:01:56,175 DEBUG [CommitterEvent Processor #1] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: rename took 5ms=0A= 2015-02-21 19:01:56,175 DEBUG [CommitterEvent Processor #1] = org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Merging data = from = FileStatus{path=3Dhdfs://hadoop0.rdpratti.com:8020/user/cloudera/wordleng= ths4/_temporary/1/task_1424550134651_0002_r_000003; isDirectory=3Dtrue; = modification_time=3D1424563312890; access_time=3D0; owner=3Dcloudera; = group=3Dcloudera; permission=3Drwxr-xr-x; isSymlink=3Dfalse} to = hdfs://hadoop0.rdpratti.com:8020/user/cloudera/wordlengths4=0A= 2015-02-21 19:01:56,176 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) 
2015-02-21 19:01:56,177 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #98
2015-02-21 19:01:56,177 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 19:01:56,177 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #99
2015-02-21 19:01:56,178 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #99
2015-02-21 19:01:56,178 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 19:01:56,178 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #100
2015-02-21 19:01:56,179 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #100
2015-02-21 19:01:56,179 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getListing took 1ms
2015-02-21 19:01:56,179 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Merging data from FileStatus{path=hdfs://hadoop0.rdpratti.com:8020/user/cloudera/wordlengths4/_temporary/1/task_1424550134651_0002_r_000003/part-r-00003; isDirectory=false; length=150; replication=3; blocksize=134217728; modification_time=1424563313574; access_time=1424563312890; owner=cloudera; group=cloudera; permission=rw-r--r--; isSymlink=false} to hdfs://hadoop0.rdpratti.com:8020/user/cloudera/wordlengths4/part-r-00003
2015-02-21 19:01:56,179 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #101
2015-02-21 19:01:56,180 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #101
2015-02-21 19:01:56,180 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 19:01:56,180 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #102
2015-02-21 19:01:56,186 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #102
2015-02-21 19:01:56,187 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: rename took 7ms
2015-02-21 19:01:56,189 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #103
2015-02-21 19:01:56,198 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #103
2015-02-21 19:01:56,198 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: delete took 9ms
2015-02-21 19:01:56,200 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.hdfs.DFSClient: /user/cloudera/wordlengths4/_SUCCESS: masked=rw-r--r--
2015-02-21 19:01:56,200 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #104
2015-02-21 19:01:56,209 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #104
2015-02-21 19:01:56,209 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: create took 9ms
2015-02-21 19:01:56,209 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.hdfs.DFSClient: computePacketChunkSize: src=/user/cloudera/wordlengths4/_SUCCESS, chunkSize=516, chunksPerPacket=127, packetSize=65532
2015-02-21 19:01:56,209 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: -1
2015-02-21 19:01:56,210 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #105
2015-02-21 19:01:56,220 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #105
2015-02-21 19:01:56,220 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: complete took 11ms
2015-02-21 19:01:56,220 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.hdfs.DFSClient: /user/cloudera/.staging/job_1424550134651_0002/COMMIT_SUCCESS: masked=rw-r--r--
2015-02-21 19:01:56,220 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #106
2015-02-21 19:01:56,231 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #106
2015-02-21 19:01:56,231 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: create took 11ms
2015-02-21 19:01:56,231 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.hdfs.DFSClient: computePacketChunkSize: src=/user/cloudera/.staging/job_1424550134651_0002/COMMIT_SUCCESS, chunkSize=516, chunksPerPacket=127, packetSize=65532
2015-02-21 19:01:56,232 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: -1
2015-02-21 19:01:56,232 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #107
2015-02-21 19:01:56,242 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #107
2015-02-21 19:01:56,242 DEBUG [CommitterEvent Processor #1] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: complete took 10ms
2015-02-21 19:01:56,243 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobCommitCompletedEvent.EventType: JOB_COMMIT_COMPLETED
2015-02-21 19:01:56,243 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_COMMIT_COMPLETED
2015-02-21 19:01:56,247 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Calling handler for JobFinishedEvent
2015-02-21 19:01:56,248 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1424550134651_0002Job Transitioned from COMMITTING to SUCCEEDED
2015-02-21 19:01:56,248 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: JOB_FINISHED
2015-02-21 19:01:56,248 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent.EventType: STATE_CHANGED
2015-02-21 19:01:56,248 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Writing event
2015-02-21 19:01:56,248 INFO [Thread-88] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so this is the last retry
2015-02-21 19:01:56,248 INFO [Thread-88] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator isAMLastRetry: true
2015-02-21 19:01:56,249 INFO [Thread-88] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator notified that shouldUnregistered is: true
2015-02-21 19:01:56,249 INFO [Thread-88] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry: true
2015-02-21 19:01:56,249 INFO [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: JobHistoryEventHandler notified that forceJobCompletion is true
2015-02-21 19:01:56,249 INFO [Thread-88] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the services
2015-02-21 19:01:56,249 DEBUG [Thread-88] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapreduce.v2.app.MRAppMaster entered state STOPPED
2015-02-21 19:01:56,249 DEBUG [Thread-88] org.apache.hadoop.service.CompositeService: org.apache.hadoop.mapreduce.v2.app.MRAppMaster: stopping services, size=7
2015-02-21 19:01:56,249 DEBUG [Thread-88] org.apache.hadoop.service.CompositeService: Stopping service #6: Service JobHistoryEventHandler in state JobHistoryEventHandler: STARTED
2015-02-21 19:01:56,250 DEBUG [Thread-88] org.apache.hadoop.service.AbstractService: Service: JobHistoryEventHandler entered state STOPPED
2015-02-21 19:01:56,250 INFO [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping JobHistoryEventHandler. Size of the outstanding queue size is 0
2015-02-21 19:01:56,256 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=18, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=71680
2015-02-21 19:01:56,257 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Flushing Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:56,257 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient flush() : bytesCurBlock 82645 lastFlushOffset 72138
2015-02-21 19:01:56,257 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 18
2015-02-21 19:01:56,257 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 18
2015-02-21 19:01:56,257 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:18 offsetInBlock:71680 lastPacketInBlock:false lastByteOffsetInBlock: 82645
2015-02-21 19:01:56,260 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 18 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1910866
2015-02-21 19:01:56,260 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In HistoryEventHandler JOB_FINISHED
2015-02-21 19:01:56,260 DEBUG [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Closing Writer
2015-02-21 19:01:56,260 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=19, src=/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist, packetSize=65532, chunksPerPacket=127, bytesCurBlock=82432
2015-02-21 19:01:56,260 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 19
2015-02-21 19:01:56,260 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 20
2015-02-21 19:01:56,260 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 20
2015-02-21 19:01:56,260 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:19 offsetInBlock:82432 lastPacketInBlock:false lastByteOffsetInBlock: 82645
2015-02-21 19:01:56,262 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 19 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1125734
2015-02-21 19:01:56,262 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742 sending packet packet seqno:20 offsetInBlock:82645 lastPacketInBlock:true lastByteOffsetInBlock: 82645
2015-02-21 19:01:56,268 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 20 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 3993833
2015-02-21 19:01:56,269 DEBUG [DataStreamer for file /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742] org.apache.hadoop.hdfs.DFSClient: Closing old block BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742
2015-02-21 19:01:56,269 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #108
2015-02-21 19:01:56,275 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #108
2015-02-21 19:01:56,275 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: complete took 6ms
2015-02-21 19:01:56,276 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: /user/history/done_intermediate/cloudera/job_1424550134651_0002.summary_tmp: masked=rw-r--r--
2015-02-21 19:01:56,276 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #109
2015-02-21 19:01:56,287 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #109
2015-02-21 19:01:56,287 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: create took 11ms
2015-02-21 19:01:56,287 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: computePacketChunkSize: src=/user/history/done_intermediate/cloudera/job_1424550134651_0002.summary_tmp, chunkSize=516, chunksPerPacket=127, packetSize=65532
2015-02-21 19:01:56,287 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/user/history/done_intermediate/cloudera/job_1424550134651_0002.summary_tmp, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
2015-02-21 19:01:56,287 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 0
2015-02-21 19:01:56,287 DEBUG [Thread-89] org.apache.hadoop.hdfs.DFSClient: Allocating new block
2015-02-21 19:01:56,287 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 1
2015-02-21 19:01:56,287 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 1
2015-02-21 19:01:56,288 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #110
2015-02-21 19:01:56,298 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #110
2015-02-21 19:01:56,298 DEBUG [Thread-89] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: addBlock took 10ms
2015-02-21 19:01:56,298 DEBUG [Thread-89] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.253:50010
2015-02-21 19:01:56,298 DEBUG [Thread-89] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.252:50010
2015-02-21 19:01:56,298 DEBUG [Thread-89] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.250:50010
2015-02-21 19:01:56,298 DEBUG [Thread-89] org.apache.hadoop.hdfs.DFSClient: Connecting to datanode 192.168.2.253:50010
2015-02-21 19:01:56,299 DEBUG [Thread-89] org.apache.hadoop.hdfs.DFSClient: Send buf size 124928
2015-02-21 19:01:56,299 DEBUG [Thread-89] org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /192.168.2.253, datanodeId = 192.168.2.253:50010
2015-02-21 19:01:56,305 DEBUG [DataStreamer for file /user/history/done_intermediate/cloudera/job_1424550134651_0002.summary_tmp block BP-268700609-192.168.2.253-1419532004456:blk_1073754571_13747] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754571_13747 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 362
2015-02-21 19:01:56,309 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754571_13747] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 2141803
2015-02-21 19:01:56,309 DEBUG [DataStreamer for file /user/history/done_intermediate/cloudera/job_1424550134651_0002.summary_tmp block BP-268700609-192.168.2.253-1419532004456:blk_1073754571_13747] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754571_13747 sending packet packet seqno:1 offsetInBlock:362 lastPacketInBlock:true lastByteOffsetInBlock: 362
2015-02-21 19:01:56,316 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754571_13747] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 3398163
2015-02-21 19:01:56,316 DEBUG [DataStreamer for file /user/history/done_intermediate/cloudera/job_1424550134651_0002.summary_tmp block BP-268700609-192.168.2.253-1419532004456:blk_1073754571_13747] org.apache.hadoop.hdfs.DFSClient: Closing old block BP-268700609-192.168.2.253-1419532004456:blk_1073754571_13747
2015-02-21 19:01:56,316 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #111
2015-02-21 19:01:56,320 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #111
2015-02-21 19:01:56,320 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: complete took 4ms
2015-02-21 19:01:56,322 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #112
2015-02-21 19:01:56,331 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #112
2015-02-21 19:01:56,331 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: setPermission took 9ms
2015-02-21 19:01:56,335 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #113
2015-02-21 19:01:56,336 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #113
2015-02-21 19:01:56,336 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 19:01:56,336 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist to hdfs://hadoop0.rdpratti.com:8020/user/history/done_intermediate/cloudera/job_1424550134651_0002-1424563268266-cloudera-Average+Word+Length-1424563316243-5-4-SUCCEEDED-root.cloudera-1424563279478.jhist_tmp
2015-02-21 19:01:56,337 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #114
2015-02-21 19:01:56,337 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #114
2015-02-21 19:01:56,337 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 0ms
2015-02-21 19:01:56,340 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #115
2015-02-21 19:01:56,341 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #115
2015-02-21 19:01:56,341 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 19:01:56,342 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #116
2015-02-21 19:01:56,342 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #116
2015-02-21 19:01:56,342 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 0ms
2015-02-21 19:01:56,343 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #117
2015-02-21 19:01:56,345 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #117
2015-02-21 19:01:56,345 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getBlockLocations took 2ms
2015-02-21 19:01:56,346 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: newInfo = LocatedBlocks{
  fileLength=82645
  underConstruction=false
  blocks=[LocatedBlock{BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742; getBlockSize()=82645; corrupt=false; offset=0; locs=[192.168.2.253:50010, 192.168.2.251:50010, 192.168.2.252:50010]}]
  lastLocatedBlock=LocatedBlock{BP-268700609-192.168.2.253-1419532004456:blk_1073754566_13742; getBlockSize()=82645; corrupt=false; offset=0; locs=[192.168.2.253:50010, 192.168.2.251:50010, 192.168.2.252:50010]}
  isLastBlockComplete=true}
2015-02-21 19:01:56,346 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: /user/history/done_intermediate/cloudera/job_1424550134651_0002-1424563268266-cloudera-Average+Word+Length-1424563316243-5-4-SUCCEEDED-root.cloudera-1424563279478.jhist_tmp: masked=rw-r--r--
2015-02-21 19:01:56,346 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #118
2015-02-21 19:01:56,353 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #118
2015-02-21 19:01:56,353 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: create took 7ms
2015-02-21 19:01:56,353 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: computePacketChunkSize: src=/user/history/done_intermediate/cloudera/job_1424550134651_0002-1424563268266-cloudera-Average+Word+Length-1424563316243-5-4-SUCCEEDED-root.cloudera-1424563279478.jhist_tmp, chunkSize=516, chunksPerPacket=127, packetSize=65532
2015-02-21 19:01:56,354 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Connecting to datanode 192.168.2.253:50010
2015-02-21 19:01:56,355 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /192.168.2.253, datanodeId = 192.168.2.253:50010
2015-02-21 19:01:56,361 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/user/history/done_intermediate/cloudera/job_1424550134651_0002-1424563268266-cloudera-Average+Word+Length-1424563316243-5-4-SUCCEEDED-root.cloudera-1424563279478.jhist_tmp, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
2015-02-21 19:01:56,363 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk packet full seqno=0, src=/user/history/done_intermediate/cloudera/job_1424550134651_0002-1424563268266-cloudera-Average+Word+Length-1424563316243-5-4-SUCCEEDED-root.cloudera-1424563279478.jhist_tmp, bytesCurBlock=65024, blockSize=134217728, appendChunk=false
2015-02-21 19:01:56,363 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 0
2015-02-21 19:01:56,363 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: computePacketChunkSize: src=/user/history/done_intermediate/cloudera/job_1424550134651_0002-1424563268266-cloudera-Average+Word+Length-1424563316243-5-4-SUCCEEDED-root.cloudera-1424563279478.jhist_tmp, chunkSize=516, chunksPerPacket=127, packetSize=65532
2015-02-21 19:01:56,363 DEBUG [Thread-91] org.apache.hadoop.hdfs.DFSClient: Allocating new block
2015-02-21 19:01:56,363 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=1, src=/user/history/done_intermediate/cloudera/job_1424550134651_0002-1424563268266-cloudera-Average+Word+Length-1424563316243-5-4-SUCCEEDED-root.cloudera-1424563279478.jhist_tmp, packetSize=65532, chunksPerPacket=127, bytesCurBlock=65024
2015-02-21 19:01:56,363 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 1
2015-02-21 19:01:56,363 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 2
2015-02-21 19:01:56,363 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 2
2015-02-21 19:01:56,363 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #119
2015-02-21 19:01:56,376 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #119
2015-02-21 19:01:56,376 DEBUG [Thread-91] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: addBlock took 13ms
2015-02-21 19:01:56,376 DEBUG [Thread-91] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.253:50010
2015-02-21 19:01:56,376 DEBUG [Thread-91] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.250:50010
2015-02-21 19:01:56,376 DEBUG [Thread-91] org.apache.hadoop.hdfs.DFSClient: pipeline = 192.168.2.252:50010
2015-02-21 19:01:56,376 DEBUG [Thread-91] org.apache.hadoop.hdfs.DFSClient: Connecting to datanode 192.168.2.253:50010
2015-02-21 19:01:56,376 DEBUG [Thread-91] org.apache.hadoop.hdfs.DFSClient: Send buf size 124928
2015-02-21 19:01:56,376 DEBUG [Thread-91] org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /192.168.2.253, datanodeId = 192.168.2.253:50010
2015-02-21 19:01:56,390 DEBUG [DataStreamer for file /user/history/done_intermediate/cloudera/job_1424550134651_0002-1424563268266-cloudera-Average+Word+Length-1424563316243-5-4-SUCCEEDED-root.cloudera-1424563279478.jhist_tmp block BP-268700609-192.168.2.253-1419532004456:blk_1073754572_13748] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754572_13748 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 65024
2015-02-21 19:01:56,390 DEBUG [DataStreamer for file /user/history/done_intermediate/cloudera/job_1424550134651_0002-1424563268266-cloudera-Average+Word+Length-1424563316243-5-4-SUCCEEDED-root.cloudera-1424563279478.jhist_tmp block BP-268700609-192.168.2.253-1419532004456:blk_1073754572_13748] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754572_13748 sending packet packet seqno:1 offsetInBlock:65024 lastPacketInBlock:false lastByteOffsetInBlock: 82645
2015-02-21 19:01:56,397 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754572_13748] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 5524750
2015-02-21 19:01:56,397 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754572_13748] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 5446790
2015-02-21 19:01:56,397 DEBUG [DataStreamer for file /user/history/done_intermediate/cloudera/job_1424550134651_0002-1424563268266-cloudera-Average+Word+Length-1424563316243-5-4-SUCCEEDED-root.cloudera-1424563279478.jhist_tmp block BP-268700609-192.168.2.253-1419532004456:blk_1073754572_13748] org.apache.hadoop.hdfs.DFSClient: DataStreamer block BP-268700609-192.168.2.253-1419532004456:blk_1073754572_13748 sending packet packet seqno:2 offsetInBlock:82645 lastPacketInBlock:true lastByteOffsetInBlock: 82645
2015-02-21 19:01:56,403 DEBUG [ResponseProcessor for block BP-268700609-192.168.2.253-1419532004456:blk_1073754572_13748] org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 2 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 3494303
2015-02-21 19:01:56,404 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #120
2015-02-21 19:01:56,442 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #120
2015-02-21 19:01:56,442 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: complete took 38ms
2015-02-21 19:01:56,442 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://hadoop0.rdpratti.com:8020/user/history/done_intermediate/cloudera/job_1424550134651_0002-1424563268266-cloudera-Average+Word+Length-1424563316243-5-4-SUCCEEDED-root.cloudera-1424563279478.jhist_tmp
2015-02-21 19:01:56,443 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #121
2015-02-21 19:01:56,453 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #121
2015-02-21 19:01:56,453 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: setPermission took 10ms
2015-02-21 19:01:56,454 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #122
2015-02-21 19:01:56,455 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #122
2015-02-21 19:01:56,455 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 19:01:56,455 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1_conf.xml to hdfs://hadoop0.rdpratti.com:8020/user/history/done_intermediate/cloudera/job_1424550134651_0002_conf.xml_tmp
2015-02-21 19:01:56,455 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #123
2015-02-21 19:01:56,456 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #123
2015-02-21 19:01:56,456 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 19:01:56,456 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #124
2015-02-21 19:01:56,457 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #124
2015-02-21 19:01:56,457 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 19:01:56,457 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #125
2015-02-21 19:01:56,458 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #125
2015-02-21 19:01:56,458 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getFileInfo took 1ms
2015-02-21 19:01:56,458 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #126
2015-02-21 19:01:56,459 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #126
2015-02-21 19:01:56,460 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: getBlockLocations took 1ms
2015-02-21 19:01:56,460 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: newInfo = LocatedBlocks{
  fileLength=108018
  underConstruction=false
  blocks=[LocatedBlock{BP-268700609-192.168.2.253-1419532004456:blk_1073754564_13740; getBlockSize()=108018; corrupt=false; offset=0; locs=[192.168.2.253:50010, 192.168.2.250:50010, 192.168.2.252:50010]}]
  lastLocatedBlock=LocatedBlock{BP-268700609-192.168.2.253-1419532004456:blk_1073754564_13740; getBlockSize()=108018; corrupt=false; offset=0; locs=[192.168.2.253:50010, 192.168.2.252:50010, 192.168.2.250:50010]}
  isLastBlockComplete=true}
2015-02-21 19:01:56,460 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: /user/history/done_intermediate/cloudera/job_1424550134651_0002_conf.xml_tmp: masked=rw-r--r--
2015-02-21 19:01:56,460 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #127
2015-02-21 19:01:56,475 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #127
2015-02-21 19:01:56,476 DEBUG [eventHandlingThread] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: create took 16ms
2015-02-21 19:01:56,476 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: computePacketChunkSize: src=/user/history/done_intermediate/cloudera/job_1424550134651_0002_conf.xml_tmp, chunkSize=516, chunksPerPacket=127, packetSize=65532
2015-02-21 19:01:56,476 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Connecting to datanode 192.168.2.253:50010
2015-02-21 19:01:56,478 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=0, src=/user/history/done_intermediate/cloudera/job_1424550134651_0002_conf.xml_tmp, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0
2015-02-21 19:01:56,479 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk packet full seqno=0, src=/user/history/done_intermediate/cloudera/job_1424550134651_0002_conf.xml_tmp, bytesCurBlock=65024, blockSize=134217728, appendChunk=false
2015-02-21 19:01:56,479 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 0
2015-02-21 19:01:56,479 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: computePacketChunkSize: src=/user/history/done_intermediate/cloudera/job_1424550134651_0002_conf.xml_tmp, chunkSize=516, chunksPerPacket=127, packetSize=65532
2015-02-21 19:01:56,479 DEBUG [Thread-93] org.apache.hadoop.hdfs.DFSClient: Allocating new block
2015-02-21 19:01:56,479 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: DFSClient writeChunk allocating new packet seqno=1, src=/user/history/done_intermediate/cloudera/job_1424550134651_0002_conf.xml_tmp, packetSize=65532, chunksPerPacket=127, bytesCurBlock=65024
2015-02-21 19:01:56,480 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 1
2015-02-21 19:01:56,480 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Queued packet 2
2015-02-21 19:01:56,480 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: Waiting for ack for: 2
2015-02-21 19:01:56,480 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #128
2015-02-21 19:01:56,487 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #128
to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #128=0A= 2015-02-21 19:01:56,487 DEBUG [Thread-93] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: addBlock took 8ms=0A= 2015-02-21 19:01:56,487 DEBUG [Thread-93] = org.apache.hadoop.hdfs.DFSClient: pipeline =3D 192.168.2.253:50010=0A= 2015-02-21 19:01:56,487 DEBUG [Thread-93] = org.apache.hadoop.hdfs.DFSClient: pipeline =3D 192.168.2.252:50010=0A= 2015-02-21 19:01:56,487 DEBUG [Thread-93] = org.apache.hadoop.hdfs.DFSClient: pipeline =3D 192.168.2.250:50010=0A= 2015-02-21 19:01:56,487 DEBUG [Thread-93] = org.apache.hadoop.hdfs.DFSClient: Connecting to datanode = 192.168.2.253:50010=0A= 2015-02-21 19:01:56,488 DEBUG [Thread-93] = org.apache.hadoop.hdfs.DFSClient: Send buf size 124928=0A= 2015-02-21 19:01:56,488 DEBUG [Thread-93] = org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient:= SASL client skipping handshake in unsecured configuration for addr =3D = /192.168.2.253, datanodeId =3D 192.168.2.253:50010=0A= 2015-02-21 19:01:56,494 DEBUG [DataStreamer for file = /user/history/done_intermediate/cloudera/job_1424550134651_0002_conf.xml_= tmp block BP-268700609-192.168.2.253-1419532004456:blk_1073754573_13749] = org.apache.hadoop.hdfs.DFSClient: DataStreamer block = BP-268700609-192.168.2.253-1419532004456:blk_1073754573_13749 sending = packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false = lastByteOffsetInBlock: 65024=0A= 2015-02-21 19:01:56,494 DEBUG [DataStreamer for file = /user/history/done_intermediate/cloudera/job_1424550134651_0002_conf.xml_= tmp block BP-268700609-192.168.2.253-1419532004456:blk_1073754573_13749] = org.apache.hadoop.hdfs.DFSClient: DataStreamer block = BP-268700609-192.168.2.253-1419532004456:blk_1073754573_13749 sending = packet packet seqno:1 offsetInBlock:65024 lastPacketInBlock:false = lastByteOffsetInBlock: 108018=0A= 2015-02-21 19:01:56,499 DEBUG [ResponseProcessor for block = BP-268700609-192.168.2.253-1419532004456:blk_1073754573_13749] = org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 0 status: SUCCESS = status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 4387447=0A= 2015-02-21 19:01:56,500 DEBUG [ResponseProcessor for block = BP-268700609-192.168.2.253-1419532004456:blk_1073754573_13749] = org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 1 status: SUCCESS = status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 4296828=0A= 2015-02-21 19:01:56,500 DEBUG [DataStreamer for file = /user/history/done_intermediate/cloudera/job_1424550134651_0002_conf.xml_= tmp block BP-268700609-192.168.2.253-1419532004456:blk_1073754573_13749] = org.apache.hadoop.hdfs.DFSClient: DataStreamer block = BP-268700609-192.168.2.253-1419532004456:blk_1073754573_13749 sending = packet packet seqno:2 offsetInBlock:108018 lastPacketInBlock:true = lastByteOffsetInBlock: 108018=0A= 2015-02-21 19:01:56,505 DEBUG [ResponseProcessor for block = BP-268700609-192.168.2.253-1419532004456:blk_1073754573_13749] = org.apache.hadoop.hdfs.DFSClient: DFSClient seqno: 2 status: SUCCESS = status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 3794865=0A= 2015-02-21 19:01:56,507 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #129=0A= 2015-02-21 19:01:56,520 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from 
cloudera got value #129=0A= 2015-02-21 19:01:56,520 DEBUG [eventHandlingThread] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: complete took 13ms=0A= 2015-02-21 19:01:56,520 INFO [eventHandlingThread] = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to = done location: = hdfs://hadoop0.rdpratti.com:8020/user/history/done_intermediate/cloudera/= job_1424550134651_0002_conf.xml_tmp=0A= 2015-02-21 19:01:56,520 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #130=0A= 2015-02-21 19:01:56,531 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #130=0A= 2015-02-21 19:01:56,531 DEBUG [eventHandlingThread] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: setPermission took 11ms=0A= 2015-02-21 19:01:56,531 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #131=0A= 2015-02-21 19:01:56,542 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #131=0A= 2015-02-21 19:01:56,542 DEBUG [eventHandlingThread] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: rename took 11ms=0A= 2015-02-21 19:01:56,542 INFO [eventHandlingThread] = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp = to done: = hdfs://hadoop0.rdpratti.com:8020/user/history/done_intermediate/cloudera/= job_1424550134651_0002.summary_tmp to = hdfs://hadoop0.rdpratti.com:8020/user/history/done_intermediate/cloudera/= job_1424550134651_0002.summary=0A= 2015-02-21 19:01:56,543 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #132=0A= 2015-02-21 19:01:56,553 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #132=0A= 2015-02-21 19:01:56,553 DEBUG [eventHandlingThread] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: rename took 10ms=0A= 2015-02-21 19:01:56,553 INFO [eventHandlingThread] = org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp = to done: = hdfs://hadoop0.rdpratti.com:8020/user/history/done_intermediate/cloudera/= job_1424550134651_0002_conf.xml_tmp to = hdfs://hadoop0.rdpratti.com:8020/user/history/done_intermediate/cloudera/= job_1424550134651_0002_conf.xml=0A= 2015-02-21 19:01:56,554 DEBUG [IPC Parameter Sending Thread #0] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #133=0A= 2015-02-21 19:01:56,564 DEBUG [IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] = org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to = hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #133=0A= 2015-02-21 19:01:56,564 DEBUG [eventHandlingThread] = org.apache.hadoop.ipc.ProtobufRpcEngine: Call: rename took 10ms=0A= 
2015-02-21 19:01:56,564 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://hadoop0.rdpratti.com:8020/user/history/done_intermediate/cloudera/job_1424550134651_0002-1424563268266-cloudera-Average+Word+Length-1424563316243-5-4-SUCCEEDED-root.cloudera-1424563279478.jhist_tmp to hdfs://hadoop0.rdpratti.com:8020/user/history/done_intermediate/cloudera/job_1424550134651_0002-1424563268266-cloudera-Average+Word+Length-1424563316243-5-4-SUCCEEDED-root.cloudera-1424563279478.jhist
2015-02-21 19:01:56,565 DEBUG [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Interrupting Event Handling thread
2015-02-21 19:01:56,565 DEBUG [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Waiting for Event Handling thread to complete
2015-02-21 19:01:56,565 DEBUG [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Shutting down timer for Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:56,565 DEBUG [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Shutting down timer Job MetaInfo for job_1424550134651_0002 history file hdfs://hadoop0.rdpratti.com:8020/user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist
2015-02-21 19:01:56,565 DEBUG [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Closing Writer
2015-02-21 19:01:56,565 INFO [Thread-88] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop()
2015-02-21 19:01:56,565 DEBUG [Thread-88] org.apache.hadoop.service.CompositeService: Stopping service #5: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter in state org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter: STARTED
2015-02-21 19:01:56,565 DEBUG [Thread-88] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter entered state STOPPED
2015-02-21 19:01:56,565 DEBUG [Thread-88] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl entered state STOPPED
2015-02-21 19:01:56,566 DEBUG [Thread-88] org.apache.hadoop.service.CompositeService: Stopping service #4: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter in state org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter: STARTED
2015-02-21 19:01:56,566 DEBUG [Thread-88] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter entered state STOPPED
2015-02-21 19:01:56,568 DEBUG [Thread-88] org.apache.hadoop.service.AbstractService: Service: RMCommunicator entered state STOPPED
2015-02-21 19:01:56,569 INFO [Thread-88] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job diagnostics to 
2015-02-21 19:01:56,569 INFO [Thread-88] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is http://hadoop0.rdpratti.com:19888/jobhistory/job/job_1424550134651_0002
2015-02-21 19:01:56,572 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #134
2015-02-21 19:01:56,583 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #134
2015-02-21 19:01:56,583 DEBUG [Thread-88] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: finishApplicationMaster took 12ms
2015-02-21 19:01:56,586 INFO [Thread-88] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for application to be successfully unregistered.
2015-02-21 19:01:57,073 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #177
2015-02-21 19:01:57,074 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#177 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:57,074 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:57,074 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getTaskAttemptCompletionEvents queueTime= 1 procesingTime= 0
2015-02-21 19:01:57,074 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#177 Retry#0
2015-02-21 19:01:57,074 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getTaskAttemptCompletionEvents from 192.168.2.253:57473 Call#177 Retry#0 Wrote 102 bytes.
2015-02-21 19:01:57,076 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #178
2015-02-21 19:01:57,076 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#178 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:57,076 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:57,076 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 0 procesingTime= 0
2015-02-21 19:01:57,076 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#178 Retry#0
2015-02-21 19:01:57,076 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#178 Retry#0 Wrote 286 bytes.
2015-02-21 19:01:57,077 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #179
2015-02-21 19:01:57,077 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#179 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:57,077 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:57,078 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: Served: getJobReport queueTime= 1 procesingTime= 0
2015-02-21 19:01:57,078 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#179 Retry#0
2015-02-21 19:01:57,078 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: responding to org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#179 Retry#0 Wrote 286 bytes.
2015-02-21 19:01:57,586 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #135
2015-02-21 19:01:57,587 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #135
2015-02-21 19:01:57,588 DEBUG [Thread-88] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: finishApplicationMaster took 2ms
2015-02-21 19:01:57,588 INFO [Thread-88] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:1 CompletedMaps:5 CompletedReds:3 ContAlloc:9 ContRel:0 HostLocal:2 RackLocal:3
2015-02-21 19:01:57,588 DEBUG [Thread-88] org.apache.hadoop.service.CompositeService: Stopping service #3: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster$StagingDirCleaningService in state org.apache.hadoop.mapreduce.v2.app.MRAppMaster$StagingDirCleaningService: STARTED
2015-02-21 19:01:57,588 DEBUG [Thread-88] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapreduce.v2.app.MRAppMaster$StagingDirCleaningService entered state STOPPED
2015-02-21 19:01:57,589 INFO [Thread-88] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory hdfs://hadoop0.rdpratti.com:8020 /user/cloudera/.staging/job_1424550134651_0002
2015-02-21 19:01:57,589 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #136
2015-02-21 19:01:57,599 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #136
2015-02-21 19:01:57,599 DEBUG [Thread-88] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: delete took 10ms
2015-02-21 19:01:57,599 DEBUG [Thread-88] org.apache.hadoop.service.CompositeService: Stopping service #2: Service org.apache.hadoop.mapred.TaskAttemptListenerImpl in state org.apache.hadoop.mapred.TaskAttemptListenerImpl: STARTED
2015-02-21 19:01:57,599 DEBUG [Thread-88] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapred.TaskAttemptListenerImpl entered state STOPPED
2015-02-21 19:01:57,599 INFO [Thread-88] org.apache.hadoop.ipc.Server: Stopping server on 35954
2015-02-21 19:01:57,599 DEBUG [IPC Server handler 0 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 35954: exiting
2015-02-21 19:01:57,599 DEBUG [IPC Server handler 1 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 1 on 35954: exiting
2015-02-21 19:01:57,599 DEBUG [IPC Server handler 2 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 2 on 35954: exiting
2015-02-21 19:01:57,599 DEBUG [IPC Server handler 3 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 3 on 35954: exiting
2015-02-21 19:01:57,599 DEBUG [IPC Server handler 4 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 4 on 35954: exiting
2015-02-21 19:01:57,599 DEBUG [IPC Server handler 5 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 5 on 35954: exiting
2015-02-21 19:01:57,599 DEBUG [IPC Server handler 7 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 7 on 35954: exiting
2015-02-21 19:01:57,599 DEBUG [IPC Server handler 6 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 6 on 35954: exiting
2015-02-21 19:01:57,599 DEBUG [IPC Server handler 8 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 8 on 35954: exiting
2015-02-21 19:01:57,599 DEBUG [IPC Server handler 9 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 9 on 35954: exiting
2015-02-21 19:01:57,599 DEBUG [IPC Server handler 10 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 10 on 35954: exiting
2015-02-21 19:01:57,599 DEBUG [IPC Server handler 11 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 11 on 35954: exiting
2015-02-21 19:01:57,600 DEBUG [IPC Server handler 13 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 13 on 35954: exiting
2015-02-21 19:01:57,600 DEBUG [IPC Server handler 12 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 12 on 35954: exiting
2015-02-21 19:01:57,600 DEBUG [IPC Server handler 14 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 14 on 35954: exiting
2015-02-21 19:01:57,600 DEBUG [IPC Server handler 15 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 15 on 35954: exiting
2015-02-21 19:01:57,600 DEBUG [IPC Server handler 16 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 16 on 35954: exiting
2015-02-21 19:01:57,600 DEBUG [IPC Server handler 17 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 17 on 35954: exiting
2015-02-21 19:01:57,600 DEBUG [IPC Server handler 18 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 18 on 35954: exiting
2015-02-21 19:01:57,600 DEBUG [IPC Server handler 19 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 19 on 35954: exiting
2015-02-21 19:01:57,600 DEBUG [IPC Server handler 20 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 20 on 35954: exiting
2015-02-21 19:01:57,600 DEBUG [IPC Server handler 21 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 21 on 35954: exiting
2015-02-21 19:01:57,600 DEBUG [IPC Server handler 22 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 22 on 35954: exiting
2015-02-21 19:01:57,600 DEBUG [IPC Server handler 23 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 23 on 35954: exiting
2015-02-21 19:01:57,600 DEBUG [IPC Server handler 24 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 24 on 35954: exiting
2015-02-21 19:01:57,600 DEBUG [IPC Server handler 25 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 25 on 35954: exiting
2015-02-21 19:01:57,600 DEBUG [IPC Server handler 26 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 26 on 35954: exiting
2015-02-21 19:01:57,601 DEBUG [IPC Server handler 27 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 27 on 35954: exiting
2015-02-21 19:01:57,601 DEBUG [IPC Server handler 28 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 28 on 35954: exiting
2015-02-21 19:01:57,601 DEBUG [IPC Server handler 29 on 35954] org.apache.hadoop.ipc.Server: IPC Server handler 29 on 35954: exiting
2015-02-21 19:01:57,601 INFO [IPC Server listener on 35954] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 35954
2015-02-21 19:01:57,602 DEBUG [Thread-88] org.apache.hadoop.service.CompositeService: org.apache.hadoop.mapred.TaskAttemptListenerImpl: stopping services, size=1
2015-02-21 19:01:57,602 DEBUG [IPC Server Responder] org.apache.hadoop.ipc.Server: Checking for old call responses.
2015-02-21 19:01:57,602 DEBUG [Thread-88] org.apache.hadoop.service.CompositeService: Stopping service #0: Service TaskHeartbeatHandler in state TaskHeartbeatHandler: STARTED
2015-02-21 19:01:57,602 DEBUG [Thread-88] org.apache.hadoop.service.AbstractService: Service: TaskHeartbeatHandler entered state STOPPED
2015-02-21 19:01:57,602 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2015-02-21 19:01:57,602 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler: TaskHeartbeatHandler thread interrupted
2015-02-21 19:01:57,602 DEBUG [Thread-88] org.apache.hadoop.service.CompositeService: Stopping service #1: Service CommitterEventHandler in state CommitterEventHandler: STARTED
2015-02-21 19:01:57,602 DEBUG [Thread-88] org.apache.hadoop.service.AbstractService: Service: CommitterEventHandler entered state STOPPED
2015-02-21 19:01:57,603 DEBUG [Thread-88] org.apache.hadoop.service.CompositeService: Stopping service #0: Service Dispatcher in state Dispatcher: STARTED
2015-02-21 19:01:57,603 DEBUG [Thread-88] org.apache.hadoop.service.AbstractService: Service: Dispatcher entered state STOPPED

[Attachment: Comparison Log with Notes.dat]

----------------------------------------------
-- Both job logs have this
----------------------------------------------
2015-02-21 19:01:17,345 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 19:01:17,355 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"9hMeV3S1Fw5yTAznf8uNAf6Fh4xFO8QJytWmSbol\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}
auths {
  method: "SIMPLE"
  mechanism: ""
}
2015-02-21 19:01:17,356 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:@org.apache.hadoop.security.token.TokenInfo(value=class org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector)
2015-02-21 19:01:17,357 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Use SIMPLE authentication for protocol ClientNamenodeProtocolPB
2015-02-21 19:01:17,357 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
auths {
  method: "SIMPLE"
  mechanism: ""
}
------------------------------------------------------------------------
-- Later, around line 1578; all lines are the same from the top of the log
------------------------------------------------------------------------
2015-02-21 17:54:59,005 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: NEGOTIATE

2015-02-21 17:54:59,049 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: NEGOTIATE
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
  challenge: "realm=\"default\",nonce=\"JP1FdRqZpK7TkUIOEbppp5YH4EhPKzZ/HLsr1Wjb\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
}

2015-02-21 17:54:59,061 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB info:org.apache.hadoop.yarn.security.SchedulerSecurityInfo$1@26c13868
--------------------------------------------------------------------------------------------------------------------------------
-- This next line (1590) has the same format, but the service IP is the client IP (192.168.2.185), whereas in the successful log it is the cluster IP (192.168.2.253)
--------------------------------------------------------------------------------------------------------------------------------
2015-02-21 17:54:59,062 DEBUG [main] org.apache.hadoop.yarn.security.AMRMTokenSelector: Looking for a token with service 192.168.2.185:8030
2015-02-21 17:54:59,063 DEBUG [main] org.apache.hadoop.yarn.security.AMRMTokenSelector: Token kind is YARN_AM_RM_TOKEN and the token's service name is 192.168.2.185:8030
2015-02-21 17:54:59,071 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Creating SASL DIGEST-MD5(TOKEN) client to authenticate to service at default
2015-02-21 17:54:59,074 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Use TOKEN authentication for protocol ApplicationMasterProtocolPB
2015-02-21 17:54:59,081 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting username: AAABS63OA3sAAAABAAAAAV8/5o8=
2015-02-21 17:54:59,081 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting userPassword
2015-02-21 17:54:59,081 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: SASL client callback: setting realm: default
2015-02-21 17:54:59,083 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Sending sasl message state: INITIATE
token: "charset=utf-8,username=\"AAABS63OA3sAAAABAAAAAV8/5o8=\",realm=\"default\",nonce=\"JP1FdRqZpK7TkUIOEbppp5YH4EhPKzZ/HLsr1Wjb\",nc=00000001,cnonce=\"D8teuBtuFWuryKopW7NGZo3cEKxEcedwVMV/js0H\",digest-uri=\"/default\",maxbuf=65536,response=7db4bef5221ba40af6f3ccdcd1f478c6,qop=auth"
auths {
  method: "TOKEN"
  mechanism: "DIGEST-MD5"
  protocol: ""
  serverId: "default"
}

------------------------------------------------------------------------------------------
-- After this point both logs are different until the JobHistory work at the end
-- I copied here the first 30 lines from each, with the bad execution before the successful one
------------------------------------------------------------------------------------------

2015-02-21 17:54:59,105 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424550134651_0001_000001 not found in AMRMTokenSecretManager.
2015-02-21 17:54:59,106 DEBUG [main] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:642)
2015-02-21 17:54:59,107 WARN [main] org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424550134651_0001_000001 not found in AMRMTokenSecretManager.
2015-02-21 17:54:59,107 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424550134651_0001_000001 not found in AMRMTokenSecretManager.
2015-02-21 17:54:59,107 DEBUG [main] org.apache.hadoop.ipc.Client: closing ipc connection to quickstart.cloudera/192.168.2.185:8030: appattempt_1424550134651_0001_000001 not found in AMRMTokenSecretManager.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): appattempt_1424550134651_0001_000001 not found in AMRMTokenSecretManager.
    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:375)
    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:552)
    at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:367)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:717)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:713)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
    at org.apache.hadoop.ipc.Client.call(Client.java:1382)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy35.registerApplicationMaster(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(ApplicationMasterProtocolPBClientImpl.java:106)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy36.registerApplicationMaster(Unknown Source)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:161)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:122)
    at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:238)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.serviceStart(MRAppMaster.java:807)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1075)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.run(MRAppMaster.java:1478)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1474)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1407)

---------------------------------------------------------------------------------------------------
-- Here are the first 30-100 lines of the successful log next
---------------------------------------------------------------------------------------------------

2015-02-21 19:01:19,345 DEBUG [main] org.apache.hadoop.security.SaslRpcClient: Received SASL message state: SUCCESS
token: "rspauth=2a49fd8d671d2d78306188fe2b8ba06a"

2015-02-21 19:01:19,346 DEBUG [main] org.apache.hadoop.ipc.Client: Negotiated QOP is :auth
2015-02-21 19:01:19,349 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera sending #10
2015-02-21 19:01:19,356 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera: starting, having connections 2
2015-02-21 19:01:19,371 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8030 from cloudera got value #10
2015-02-21 19:01:19,371 DEBUG [main] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: registerApplicationMaster took 52ms
2015-02-21 19:01:19,431 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: maxContainerCapability: 
2015-02-21 19:01:19,431 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: root.cloudera
2015-02-21 19:01:19,433 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: JOB_QUEUE_CHANGED
2015-02-21 19:01:19,433 DEBUG [IPC Server listener on 59910] org.apache.hadoop.ipc.Server: Server connection from 192.168.2.253:57473; # active connections: 1; # queued calls: 0
2015-02-21 19:01:19,434 DEBUG [main] org.apache.hadoop.service.AbstractService: Service RMCommunicator is started
2015-02-21 19:01:19,434 DEBUG [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter is started
2015-02-21 19:01:19,436 DEBUG [main] org.apache.hadoop.service.AbstractService: Service: org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl entered state INITED
2015-02-21 19:01:19,439 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #-3
2015-02-21 19:01:19,443 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500
2015-02-21 19:01:19,444 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: Successfully authorized userInfo {
  effectiveUser: "cloudera"
}
protocol: "org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB"

2015-02-21 19:01:19,444 DEBUG [Socket Reader #1 for port 59910] org.apache.hadoop.ipc.Server: got #66
2015-02-21 19:01:19,444 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 59910: org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB.getJobReport from 192.168.2.253:57473 Call#66 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
2015-02-21 19:01:19,446 DEBUG [IPC Server handler 0 on 59910] org.apache.hadoop.security.UserGroupInformation: PrivilegedAction as:cloudera (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-02-21 19:01:19,460 INFO [main] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
2015-02-21 19:01:19,460 DEBUG [main] org.apache.hadoop.yarn.ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
2015-02-21 19:01:19,468 DEBUG [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl is started
2015-02-21 19:01:19,468 DEBUG [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter is started
2015-02-21 19:01:19,477 DEBUG [main] org.apache.hadoop.service.AbstractService: Service JobHistoryEventHandler is started
2015-02-21 19:01:19,478 DEBUG [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster is started
2015-02-21 19:01:19,478 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobStartEvent.EventType: JOB_START
2015-02-21 19:01:19,478 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Processing job_1424550134651_0002 of type JOB_START
2015-02-21 19:01:19,483 DEBUG [eventHandlingThread] org.apache.hadoop.hdfs.DFSClient: /user/cloudera/.staging/job_1424550134651_0002/job_1424550134651_0002_1.jhist: masked=rw-r--r--
2015-02-21 19:01:19,485 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1424550134651_0002Job Transitioned from INITED to SETUP
2015-02-21 19:01:19,485 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: JOB_INITED
2015-02-21 19:01:19,485 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.jobhistory.JobHistoryEvent.EventType: JOB_INFO_CHANGED
2015-02-21 19:01:19,485 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.commit.CommitterJobSetupEvent.EventType: JOB_SETUP
2015-02-21 19:01:19,495 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP
2015-02-21 19:01:19,497 DEBUG [CommitterEvent Processor #0] org.apache.hadoop.hdfs.DFSClient: /user/cloudera/wordlengths4/_temporary/1: masked=rwxr-xr-x
2015-02-21 19:01:19,499 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #12
2015-02-21 19:01:19,501 DEBUG [IPC Parameter Sending Thread #0] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera sending #11
2015-02-21 19:01:19,542 DEBUG [IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera] org.apache.hadoop.ipc.Client: IPC Client (1541092382) connection to hadoop0.rdpratti.com/192.168.2.253:8020 from cloudera got value #12
2015-02-21 19:01:19,542 DEBUG [CommitterEvent Processor #0] org.apache.hadoop.ipc.ProtobufRpcEngine: Call: mkdirs took 44ms
2015-02-21 19:01:19,545 DEBUG [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Dispatching the event org.apache.hadoop.mapreduce.v2.app.job.event.JobSetupCompletedEvent.EventType: JOB_SETUP_COMPLETED

[Attachment: hadoop-cluster.xml]

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop0.rdpratti.com:8020</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>hadoop0.rdpratti.com:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop0.rdpratti.com:8032</value>
  </property>
  <property>
    <name>mapreduce.map.log.level</name>
    <value>DEBUG</value>
  </property>
  <property>
    <name>hadoop.root.logger</name>
    <value>DEBUG</value>
  </property>
  <property>
    <name>yarn.root.logger</name>
    <value>DEBUG</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.log.level</name>
    <value>DEBUG</value>
  </property>
</configuration>
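
A hedged reading of the comparison above, offered as an assumption to verify rather than a confirmed fix: the failing AM looks for its AMRMToken under service 192.168.2.185:8030 and then tries to register with quickstart.cloudera/192.168.2.185:8030, i.e. the client VM, while the successful run talks to the cluster RM at 192.168.2.253:8030. The stack trace fails inside RMCommunicator.register(), and the MR AM resolves that scheduler endpoint from the job configuration it was submitted with. hadoop-cluster.xml above sets yarn.resourcemanager.address (port 8032) but not yarn.resourcemanager.scheduler.address (port 8030), so the scheduler address would fall back to whatever the submitting VM resolves by default, which matches the client IP in the token lookup. If that is what is happening, adding the scheduler address to the client-side config should make the token's service name match the cluster RM. A minimal sketch, assuming the cluster RM scheduler listens on the default port 8030 on hadoop0.rdpratti.com:

  <property>
    <!-- assumption: the cluster RM scheduler runs on the default port 8030 -->
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop0.rdpratti.com:8030</value>
  </property>

The AMRMToken's service name is derived from this scheduler address, which is why setting yarn.resourcemanager.address alone may not be enough for a remote submission.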