hive-user mailing list archives

From Bryan Jeffrey <bryan.jeff...@gmail.com>
Subject Re: Error in ORC Compress stream in hive 0.12
Date Fri, 07 Mar 2014 14:08:41 GMT
Philippe,


I am running Hive 0.12.0 and Hadoop 2.2.0. We were seeing a similar issue.
I applied the HIVE-5991 patch; it applied cleanly, but we still saw the
same behavior. We then applied HIVE-6382 and observed that the ORC data
now loads correctly.
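
If it helps, a quick check once Hive is rebuilt with those patches is to
reload the ORC table and re-run the query that previously failed (reusing
the table name from your mail), e.g.:

    -- previously raised java.lang.IndexOutOfBoundsException in the ORC reader
    SELECT mm_uuid FROM table WHERE CAMPAIGN_ID IN (141873);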


Regards,


Bryan Jeffrey


On Thu, Mar 6, 2014 at 2:07 PM, Philippe Kernévez <pkernevez@octo.com> wrote:

>  Hi,
>
> I import data from CSV files into an external table.
> I don't get any errors when querying that external table.
> Then I import that table into an ORC table.
> When I run some queries I get inexplicable (but reproducible) errors.
> After some investigation I found the following:
> The query "select mm_uuid from table where CAMPAIGN_ID IN
> (139870,141873)" works fine.
> But a query over a subset of that scope fails:
> select mm_uuid from table where CAMPAIGN_ID IN (141873)
> (the same happens with select mm_uuid from publigroupe.denorm where
> CAMPAIGN_ID = 141873)
>
> The error is due to:
> Caused by: java.io.IOException: java.lang.IndexOutOfBoundsException
> at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>  at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:276)
>  at
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
> at
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
>  at
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
> at
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:302)
>
>
> The full stack is below.
>
> My external table is created like this:
>     CREATE EXTERNAL TABLE table_temp (
>       MM_UUID string,
> ....
>     )
>     ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
>     STORED AS TEXTFILE
>     LOCATION 'hdfs:///source/data';
>
> My ORC table is created like this:
>     CREATE TABLE table (
>       MM_UUID string,
> ...
>     )
>     STORED AS orc;
>
> My import is done with:
> INSERT OVERWRITE TABLE table
>     SELECT
>         MM_UUID,
> ...
>     FROM table_temp
>     WHERE MM_TIMESTAMP IS NOT NULL;
>
> If I import only the affected campaign (CAMPAIGN_ID = 141873) into my
> table, the failure no longer occurs, so I suppose it is not caused by my
> data.
>
> If I import all the data into an ORC table without compression, it seems
> that the error no longer occurs...
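>
> For reference, an ORC table with compression disabled can be declared along
> these lines (table_nocompress is just an illustrative name; ZLIB is the
> default codec for ORC):
>     CREATE TABLE table_nocompress (
>       MM_UUID string
>     )
>     STORED AS orc
>     TBLPROPERTIES ("orc.compress"="NONE");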
>
> Do you have any explanation?
>
> Thanks for your help,
> Philippe
>
>
>
> And now the full log...
>
>          2014-03-06 09:52:51,618 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for
> application appattempt_1393951799033_0207_000001
> 2014-03-06 09:52:51,737 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2014-03-06 09:52:51,748 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 2014-03-06 09:52:51,806 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2014-03-06 09:52:51,806 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN,
> Service: , Ident:
> (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@49d8c528)
> 2014-03-06 09:52:51,819 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max attempts:
> 2 for application: 207. Attempt num: 1 is last retry: false
> 2014-03-06 09:52:51,871 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 2014-03-06 09:52:51,880 WARN [main] org.apache.hadoop.conf.Configuration:
> job.xml:an attempt to override final parameter:
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 2014-03-06 09:52:52,129 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in
> config org.apache.hadoop.hive.shims.HadoopShimsSecure$NullOutputCommitter
> 2014-03-06 09:52:52,130 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is
> org.apache.hadoop.hive.shims.HadoopShimsSecure$NullOutputCommitter
> 2014-03-06 09:52:52,152 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.jobhistory.EventType for class
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> 2014-03-06 09:52:52,152 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
> 2014-03-06 09:52:52,153 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
> 2014-03-06 09:52:52,153 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
> 2014-03-06 09:52:52,153 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
> 2014-03-06 09:52:52,154 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
> 2014-03-06 09:52:52,154 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for
> class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
> 2014-03-06 09:52:52,155 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for
> class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
> 2014-03-06 09:52:52,198 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
> 2014-03-06 09:52:52,324 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 2014-03-06 09:52:52,349 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> period at 60 second(s).
> 2014-03-06 09:52:52,349 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics
> system started
> 2014-03-06 09:52:52,356 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for
> job_1393951799033_0207 to jobTokenSecretManager
> 2014-03-06 09:52:52,405 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing
> job_1393951799033_0207 because: not enabled; too many maps; too much input;
> too much RAM;
> 2014-03-06 09:52:52,420 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job
> job_1393951799033_0207 = 5763483002. Number of splits = 20
> 2014-03-06 09:52:52,420 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for
> job job_1393951799033_0207 = 0
> 2014-03-06 09:52:52,420 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1393951799033_0207Job Transitioned from NEW to INITED
> 2014-03-06 09:52:52,421 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching
> normal, non-uberized, multi-container job job_1393951799033_0207.
> 2014-03-06 09:52:52,444 INFO [Socket Reader #1 for port 38294]
> org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 38294
> 2014-03-06 09:52:52,454 INFO [main]
> org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding
> protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
> 2014-03-06 09:52:52,454 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2014-03-06 09:52:52,454 INFO [IPC Server listener on 38294]
> org.apache.hadoop.ipc.Server: IPC Server listener on 38294: starting
> 2014-03-06 09:52:52,455 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated
> MRClientService at pgnode3/148.251.67.165:38294
> 2014-03-06 09:52:52,476 INFO [main] org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2014-03-06 09:52:52,499 INFO [main] org.apache.hadoop.http.HttpServer:
> Added global filter 'safety'
> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 2014-03-06 09:52:52,501 INFO [main] org.apache.hadoop.http.HttpServer:
> Added filter AM_PROXY_FILTER
> (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> context mapreduce
> 2014-03-06 09:52:52,501 INFO [main] org.apache.hadoop.http.HttpServer:
> Added filter AM_PROXY_FILTER
> (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to
> context static
> 2014-03-06 09:52:52,502 INFO [main] org.apache.hadoop.http.HttpServer:
> adding path spec: /mapreduce/*
> 2014-03-06 09:52:52,502 INFO [main] org.apache.hadoop.http.HttpServer:
> adding path spec: /ws/*
> 2014-03-06 09:52:52,503 INFO [main] org.apache.hadoop.http.HttpServer:
> Jetty bound to port 40852
> 2014-03-06 09:52:52,503 INFO [main] org.mortbay.log: jetty-6.1.26
> 2014-03-06 09:52:52,516 INFO [main] org.mortbay.log: Extract
> jar:file:/usr/lib/hadoop-yarn/hadoop-yarn-common-2.2.0.2.0.6.0-102.jar!/webapps/mapreduce
> to /tmp/Jetty_0_0_0_0_40852_mapreduce____gtdi80/webapp
> 2014-03-06 09:52:52,633 INFO [main] org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:40852
> 2014-03-06 09:52:52,633 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
> Web app /mapreduce started at 40852
> 2014-03-06 09:52:52,792 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
> Registered webapp guice modules
> 2014-03-06 09:52:52,795 INFO [Socket Reader #1 for port 57703]
> org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 57703
> 2014-03-06 09:52:52,797 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2014-03-06 09:52:52,797 INFO [IPC Server listener on 57703]
> org.apache.hadoop.ipc.Server: IPC Server listener on 57703: starting
> 2014-03-06 09:52:52,806 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> nodeBlacklistingEnabled:true
> 2014-03-06 09:52:52,806 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> maxTaskFailuresPerNode is 3
> 2014-03-06 09:52:52,806 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
> blacklistDisablePercent is 33
> 2014-03-06 09:52:52,819 INFO [main] org.apache.hadoop.yarn.client.RMProxy:
> Connecting to ResourceManager at pgmaster/148.251.67.162:8030
> 2014-03-06 09:52:52,843 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> maxContainerCapability: 28000
> 2014-03-06 09:52:52,845 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper
> limit on the thread pool size is 500
> 2014-03-06 09:52:52,846 INFO [main]
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> yarn.client.max-nodemanagers-proxies : 500
> 2014-03-06 09:52:52,851 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1393951799033_0207Job Transitioned from INITED to SETUP
> 2014-03-06 09:52:52,852 INFO [CommitterEvent Processor #0]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: JOB_SETUP
> 2014-03-06 09:52:52,853 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1393951799033_0207Job Transitioned from SETUP to RUNNING
> 2014-03-06 09:52:52,863 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode1 to /default-rack
> 2014-03-06 09:52:52,866 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000000 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,866 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode2 to /default-rack
> 2014-03-06 09:52:52,866 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000001 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,866 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode1 to /default-rack
> 2014-03-06 09:52:52,867 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000002 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,867 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode3 to /default-rack
> 2014-03-06 09:52:52,867 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000003 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,867 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode2 to /default-rack
> 2014-03-06 09:52:52,867 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000004 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,867 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode3 to /default-rack
> 2014-03-06 09:52:52,867 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000005 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,867 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode3 to /default-rack
> 2014-03-06 09:52:52,867 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000006 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,867 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode1 to /default-rack
> 2014-03-06 09:52:52,867 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000007 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,867 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode2 to /default-rack
> 2014-03-06 09:52:52,868 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000008 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,868 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode2 to /default-rack
> 2014-03-06 09:52:52,868 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000009 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,868 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode1 to /default-rack
> 2014-03-06 09:52:52,868 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000010 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,868 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode3 to /default-rack
> 2014-03-06 09:52:52,868 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000011 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,868 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode1 to /default-rack
> 2014-03-06 09:52:52,868 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000012 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,868 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode2 to /default-rack
> 2014-03-06 09:52:52,868 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000013 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,869 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode3 to /default-rack
> 2014-03-06 09:52:52,869 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000014 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,869 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode1 to /default-rack
> 2014-03-06 09:52:52,869 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000015 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,869 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode2 to /default-rack
> 2014-03-06 09:52:52,869 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000016 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,869 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode3 to /default-rack
> 2014-03-06 09:52:52,870 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000017 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,870 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode1 to /default-rack
> 2014-03-06 09:52:52,870 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000018 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,870 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000019 Task Transitioned from NEW to SCHEDULED
> 2014-03-06 09:52:52,870 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000000_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,871 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000001_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,871 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000002_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,871 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000003_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,871 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000004_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,871 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000005_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,871 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,871 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000007_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,871 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000008_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,871 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000009_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,871 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000010_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,871 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000011_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,871 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000012_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,871 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000013_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,871 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000014_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,871 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000015_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,871 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000016_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,872 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000017_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,872 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000018_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,872 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000019_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:52,872 INFO [Thread-48]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> mapResourceReqt:1536
> 2014-03-06 09:52:52,907 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer
> setup for JobId: job_1393951799033_0207, File:
> hdfs://pgmaster:8020/user/hue/.staging/job_1393951799033_0207/job_1393951799033_0207_1.jhist
> 2014-03-06 09:52:52,923 INFO [eventHandlingThread]
> org.apache.hadoop.conf.Configuration.deprecation: user.name is
> deprecated. Instead, use mapreduce.job.user.name
> 2014-03-06 09:52:53,844 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:0 ScheduledMaps:20 ScheduledReds:0 AssignedMaps:0
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
> HostLocal:0 RackLocal:0
> 2014-03-06 09:52:53,864 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1393951799033_0207: ask=5 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:79872, vCores:0> knownNMs=3
> 2014-03-06 09:52:53,899 INFO [IPC Server handler 0 on 38294]
> org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Getting task
> report for MAP   job_1393951799033_0207. Report-size will be 20
> 2014-03-06 09:52:53,943 INFO [IPC Server handler 0 on 38294]
> org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Getting task
> report for REDUCE   job_1393951799033_0207. Report-size will be 0
> 2014-03-06 09:52:54,890 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 20
> 2014-03-06 09:52:54,890 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000002 to
> attempt_1393951799033_0207_m_000003_0
> 2014-03-06 09:52:54,891 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000003 to
> attempt_1393951799033_0207_m_000005_0
> 2014-03-06 09:52:54,891 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000004 to
> attempt_1393951799033_0207_m_000006_0
> 2014-03-06 09:52:54,891 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000005 to
> attempt_1393951799033_0207_m_000011_0
> 2014-03-06 09:52:54,891 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000006 to
> attempt_1393951799033_0207_m_000014_0
> 2014-03-06 09:52:54,892 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000007 to
> attempt_1393951799033_0207_m_000017_0
> 2014-03-06 09:52:54,892 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000008 to
> attempt_1393951799033_0207_m_000001_0
> 2014-03-06 09:52:54,892 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000009 to
> attempt_1393951799033_0207_m_000004_0
> 2014-03-06 09:52:54,892 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000010 to
> attempt_1393951799033_0207_m_000008_0
> 2014-03-06 09:52:54,892 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000011 to
> attempt_1393951799033_0207_m_000009_0
> 2014-03-06 09:52:54,892 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000012 to
> attempt_1393951799033_0207_m_000013_0
> 2014-03-06 09:52:54,892 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000013 to
> attempt_1393951799033_0207_m_000016_0
> 2014-03-06 09:52:54,892 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000014 to
> attempt_1393951799033_0207_m_000000_0
> 2014-03-06 09:52:54,892 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000015 to
> attempt_1393951799033_0207_m_000002_0
> 2014-03-06 09:52:54,892 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000016 to
> attempt_1393951799033_0207_m_000007_0
> 2014-03-06 09:52:54,892 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000017 to
> attempt_1393951799033_0207_m_000010_0
> 2014-03-06 09:52:54,892 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000018 to
> attempt_1393951799033_0207_m_000012_0
> 2014-03-06 09:52:54,893 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000019 to
> attempt_1393951799033_0207_m_000015_0
> 2014-03-06 09:52:54,893 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000020 to
> attempt_1393951799033_0207_m_000018_0
> 2014-03-06 09:52:54,893 INFO [RMCommunicator Allocator]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode1 to /default-rack
> 2014-03-06 09:52:54,893 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000021 to
> attempt_1393951799033_0207_m_000019_0
> 2014-03-06 09:52:54,893 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:20
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:20 ContRel:0
> HostLocal:19 RackLocal:0
> 2014-03-06 09:52:54,903 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode3 to /default-rack
> 2014-03-06 09:52:54,911 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
> file on the remote FS is
> hdfs://pgmaster:8020/user/hue/.staging/job_1393951799033_0207/job.jar
> 2014-03-06 09:52:54,913 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf
> file on the remote FS is /user/hue/.staging/job_1393951799033_0207/job.xml
>  2014-03-06 09:52:54,916 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
> tokens and #1 secret keys for NM use for launching container
> 2014-03-06 09:52:54,916 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
> containertokens_dob is 1
> 2014-03-06 09:52:54,916 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
> shuffle token in serviceData
> 2014-03-06 09:52:54,931 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000003_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,932 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode3 to /default-rack
> 2014-03-06 09:52:54,932 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000005_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,932 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode3 to /default-rack
> 2014-03-06 09:52:54,932 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,933 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode3 to /default-rack
> 2014-03-06 09:52:54,933 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000011_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,933 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode3 to /default-rack
> 2014-03-06 09:52:54,933 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000014_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,933 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode3 to /default-rack
> 2014-03-06 09:52:54,934 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000017_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,934 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode2 to /default-rack
> 2014-03-06 09:52:54,934 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000001_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,934 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode2 to /default-rack
> 2014-03-06 09:52:54,934 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000004_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,934 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode2 to /default-rack
> 2014-03-06 09:52:54,935 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000008_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,935 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode2 to /default-rack
> 2014-03-06 09:52:54,935 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000009_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,935 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode2 to /default-rack
> 2014-03-06 09:52:54,935 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000013_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,936 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode2 to /default-rack
> 2014-03-06 09:52:54,936 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000016_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,936 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode1 to /default-rack
> 2014-03-06 09:52:54,936 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000000_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,936 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode1 to /default-rack
> 2014-03-06 09:52:54,937 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000002_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,937 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode1 to /default-rack
> 2014-03-06 09:52:54,937 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000007_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,937 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode1 to /default-rack
> 2014-03-06 09:52:54,938 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000010_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,938 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode1 to /default-rack
> 2014-03-06 09:52:54,938 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000012_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,938 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode1 to /default-rack
> 2014-03-06 09:52:54,938 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000015_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,938 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode1 to /default-rack
> 2014-03-06 09:52:54,939 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000018_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,939 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode1 to /default-rack
> 2014-03-06 09:52:54,939 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000019_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:52:54,940 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000002 taskAttempt
> attempt_1393951799033_0207_m_000003_0
> 2014-03-06 09:52:54,940 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000003 taskAttempt
> attempt_1393951799033_0207_m_000005_0
> 2014-03-06 09:52:54,940 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000004 taskAttempt
> attempt_1393951799033_0207_m_000006_0
> 2014-03-06 09:52:54,940 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000005 taskAttempt
> attempt_1393951799033_0207_m_000011_0
> 2014-03-06 09:52:54,940 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000006 taskAttempt
> attempt_1393951799033_0207_m_000014_0
> 2014-03-06 09:52:54,941 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000007 taskAttempt
> attempt_1393951799033_0207_m_000017_0
> 2014-03-06 09:52:54,941 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000008 taskAttempt
> attempt_1393951799033_0207_m_000001_0
> 2014-03-06 09:52:54,941 INFO [ContainerLauncher #7]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000009 taskAttempt
> attempt_1393951799033_0207_m_000004_0
> 2014-03-06 09:52:54,941 INFO [ContainerLauncher #8]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000010 taskAttempt
> attempt_1393951799033_0207_m_000008_0
> 2014-03-06 09:52:54,941 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000011 taskAttempt
> attempt_1393951799033_0207_m_000009_0
> 2014-03-06 09:52:54,941 INFO [ContainerLauncher #7]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000004_0
> 2014-03-06 09:52:54,941 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000014_0
> 2014-03-06 09:52:54,941 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000017_0
> 2014-03-06 09:52:54,941 INFO [ContainerLauncher #8]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000008_0
> 2014-03-06 09:52:54,941 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000011_0
> 2014-03-06 09:52:54,941 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000005_0
> 2014-03-06 09:52:54,941 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000009_0
> 2014-03-06 09:52:54,941 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000006_0
> 2014-03-06 09:52:54,941 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000001_0
> 2014-03-06 09:52:54,942 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000003_0
> 2014-03-06 09:52:54,942 INFO [ContainerLauncher #7]
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> Opening proxy : pgnode2:45454
> 2014-03-06 09:52:54,950 INFO [ContainerLauncher #0]
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> Opening proxy : pgnode3:45454
> 2014-03-06 09:52:54,970 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000005_0
> : 13562
> 2014-03-06 09:52:54,970 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000011_0
> : 13562
> 2014-03-06 09:52:54,970 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000012 taskAttempt
> attempt_1393951799033_0207_m_000013_0
> 2014-03-06 09:52:54,970 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000013 taskAttempt
> attempt_1393951799033_0207_m_000016_0
> 2014-03-06 09:52:54,970 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000013_0
> 2014-03-06 09:52:54,970 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000016_0
> 2014-03-06 09:52:54,971 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000005_0] using containerId:
> [container_1393951799033_0207_01_000003 on NM: [pgnode3:45454]
> 2014-03-06 09:52:54,973 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000005_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:54,973 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000011_0] using containerId:
> [container_1393951799033_0207_01_000005 on NM: [pgnode3:45454]
> 2014-03-06 09:52:54,973 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000011_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:54,974 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000005 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:54,974 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000011 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:54,976 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000009_0
> : 13562
> 2014-03-06 09:52:54,976 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000014 taskAttempt
> attempt_1393951799033_0207_m_000000_0
> 2014-03-06 09:52:54,976 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000000_0
> 2014-03-06 09:52:54,976 INFO [ContainerLauncher #9]
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
> Opening proxy : pgnode1:45454
> 2014-03-06 09:52:54,977 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000009_0] using containerId:
> [container_1393951799033_0207_01_000011 on NM: [pgnode2:45454]
> 2014-03-06 09:52:54,977 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000009_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:54,977 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000009 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:54,985 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000000_0
> : 13562
> 2014-03-06 09:52:54,985 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000015 taskAttempt
> attempt_1393951799033_0207_m_000002_0
> 2014-03-06 09:52:54,985 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000002_0
> 2014-03-06 09:52:54,985 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000000_0] using containerId:
> [container_1393951799033_0207_01_000014 on NM: [pgnode1:45454]
> 2014-03-06 09:52:54,986 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000000_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:54,986 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000000 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:54,987 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000002_0
> : 13562
> 2014-03-06 09:52:54,987 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000016 taskAttempt
> attempt_1393951799033_0207_m_000007_0
> 2014-03-06 09:52:54,987 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000007_0
> 2014-03-06 09:52:54,987 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000002_0] using containerId:
> [container_1393951799033_0207_01_000015 on NM: [pgnode1:45454]
> 2014-03-06 09:52:54,987 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000002_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:54,987 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000002 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:54,988 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000007_0
> : 13562
> 2014-03-06 09:52:54,989 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000017 taskAttempt
> attempt_1393951799033_0207_m_000010_0
> 2014-03-06 09:52:54,989 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000010_0
> 2014-03-06 09:52:54,989 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000007_0] using containerId:
> [container_1393951799033_0207_01_000016 on NM: [pgnode1:45454]
> 2014-03-06 09:52:54,989 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000007_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:54,989 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000007 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:54,990 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000010_0
> : 13562
> 2014-03-06 09:52:54,990 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000018 taskAttempt
> attempt_1393951799033_0207_m_000012_0
> 2014-03-06 09:52:54,990 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000012_0
> 2014-03-06 09:52:54,990 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000010_0] using containerId:
> [container_1393951799033_0207_01_000017 on NM: [pgnode1:45454]
> 2014-03-06 09:52:54,991 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000010_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:54,991 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000010 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:54,992 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000012_0
> : 13562
> 2014-03-06 09:52:54,992 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000019 taskAttempt
> attempt_1393951799033_0207_m_000015_0
> 2014-03-06 09:52:54,992 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000015_0
> 2014-03-06 09:52:54,992 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000012_0] using containerId:
> [container_1393951799033_0207_01_000018 on NM: [pgnode1:45454]
> 2014-03-06 09:52:54,993 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000012_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:54,993 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000012 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:54,993 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000015_0
> : 13562
> 2014-03-06 09:52:54,993 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000020 taskAttempt
> attempt_1393951799033_0207_m_000018_0
> 2014-03-06 09:52:54,993 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000018_0
> 2014-03-06 09:52:54,993 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000015_0] using containerId:
> [container_1393951799033_0207_01_000019 on NM: [pgnode1:45454]
> 2014-03-06 09:52:54,994 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000015_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:54,994 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000015 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:54,995 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000018_0
> : 13562
> 2014-03-06 09:52:54,995 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000021 taskAttempt
> attempt_1393951799033_0207_m_000019_0
> 2014-03-06 09:52:54,995 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000019_0
> 2014-03-06 09:52:54,995 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000018_0] using containerId:
> [container_1393951799033_0207_01_000020 on NM: [pgnode1:45454]
> 2014-03-06 09:52:54,995 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000018_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:54,995 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000018 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:54,996 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000019_0
> : 13562
> 2014-03-06 09:52:54,996 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000019_0] using containerId:
> [container_1393951799033_0207_01_000021 on NM: [pgnode1:45454]
> 2014-03-06 09:52:54,996 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000019_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:54,996 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000019 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:55,007 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000006_0
> : 13562
> 2014-03-06 09:52:55,007 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000014_0
> : 13562
> 2014-03-06 09:52:55,007 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000003_0
> : 13562
> 2014-03-06 09:52:55,007 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000017_0
> : 13562
> 2014-03-06 09:52:55,007 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000006_0] using containerId:
> [container_1393951799033_0207_01_000004 on NM: [pgnode3:45454]
> 2014-03-06 09:52:55,008 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:55,008 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000014_0] using containerId:
> [container_1393951799033_0207_01_000006 on NM: [pgnode3:45454]
> 2014-03-06 09:52:55,008 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000014_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:55,008 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000003_0] using containerId:
> [container_1393951799033_0207_01_000002 on NM: [pgnode3:45454]
> 2014-03-06 09:52:55,009 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000003_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:55,009 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000017_0] using containerId:
> [container_1393951799033_0207_01_000007 on NM: [pgnode3:45454]
> 2014-03-06 09:52:55,009 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000017_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:55,009 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000006 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:55,009 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000014 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:55,009 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000003 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:55,009 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000017 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:55,018 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000013_0
> : 13562
> 2014-03-06 09:52:55,018 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000001_0
> : 13562
> 2014-03-06 09:52:55,018 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000016_0
> : 13562
> 2014-03-06 09:52:55,018 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000013_0] using containerId:
> [container_1393951799033_0207_01_000012 on NM: [pgnode2:45454]
> 2014-03-06 09:52:55,018 INFO [ContainerLauncher #8]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000008_0
> : 13562
> 2014-03-06 09:52:55,018 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000013_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:55,019 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000001_0] using containerId:
> [container_1393951799033_0207_01_000008 on NM: [pgnode2:45454]
> 2014-03-06 09:52:55,019 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000001_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:55,019 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000016_0] using containerId:
> [container_1393951799033_0207_01_000013 on NM: [pgnode2:45454]
> 2014-03-06 09:52:55,019 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000016_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:55,019 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000008_0] using containerId:
> [container_1393951799033_0207_01_000010 on NM: [pgnode2:45454]
> 2014-03-06 09:52:55,020 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000008_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:55,020 INFO [ContainerLauncher #7]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000004_0
> : 13562
> 2014-03-06 09:52:55,020 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000013 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:55,020 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000001 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:55,020 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000016 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:55,020 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000008 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:55,020 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000004_0] using containerId:
> [container_1393951799033_0207_01_000009 on NM: [pgnode2:45454]
> 2014-03-06 09:52:55,021 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000004_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:52:55,021 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000004 Task Transitioned from SCHEDULED to RUNNING
> 2014-03-06 09:52:55,896 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1393951799033_0207: ask=5 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:49152, vCores:-20> knownNMs=3
> 2014-03-06 09:52:56,298 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:56,344 INFO [IPC Server handler 0 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000006 asked for a task
> 2014-03-06 09:52:56,344 INFO [IPC Server handler 0 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000006 given task:
> attempt_1393951799033_0207_m_000014_0
> 2014-03-06 09:52:56,537 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:56,562 INFO [IPC Server handler 1 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000002 asked for a task
> 2014-03-06 09:52:56,562 INFO [IPC Server handler 1 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000002 given task:
> attempt_1393951799033_0207_m_000003_0
> 2014-03-06 09:52:56,679 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:56,711 INFO [IPC Server handler 2 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000009 asked for a task
> 2014-03-06 09:52:56,711 INFO [IPC Server handler 2 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000009 given task:
> attempt_1393951799033_0207_m_000004_0
> 2014-03-06 09:52:56,759 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:56,788 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:56,789 INFO [IPC Server handler 3 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000012 asked for a task
> 2014-03-06 09:52:56,789 INFO [IPC Server handler 3 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000012 given task:
> attempt_1393951799033_0207_m_000013_0
> 2014-03-06 09:52:56,826 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:56,839 INFO [IPC Server handler 4 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000003 asked for a task
> 2014-03-06 09:52:56,839 INFO [IPC Server handler 4 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000003 given task:
> attempt_1393951799033_0207_m_000005_0
> 2014-03-06 09:52:56,850 INFO [IPC Server handler 5 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000007 asked for a task
> 2014-03-06 09:52:56,850 INFO [IPC Server handler 5 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000007 given task:
> attempt_1393951799033_0207_m_000017_0
> 2014-03-06 09:52:56,945 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:56,980 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:56,984 INFO [IPC Server handler 6 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000013 asked for a task
> 2014-03-06 09:52:56,984 INFO [IPC Server handler 6 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000013 given task:
> attempt_1393951799033_0207_m_000016_0
> 2014-03-06 09:52:57,019 INFO [IPC Server handler 7 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000004 asked for a task
> 2014-03-06 09:52:57,019 INFO [IPC Server handler 7 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000004 given task:
> attempt_1393951799033_0207_m_000006_0
> 2014-03-06 09:52:57,088 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:57,093 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:57,103 INFO [IPC Server handler 8 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000010 asked for a task
> 2014-03-06 09:52:57,103 INFO [IPC Server handler 8 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000010 given task:
> attempt_1393951799033_0207_m_000008_0
> 2014-03-06 09:52:57,149 INFO [IPC Server handler 9 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000005 asked for a task
> 2014-03-06 09:52:57,149 INFO [IPC Server handler 9 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000005 given task:
> attempt_1393951799033_0207_m_000011_0
> 2014-03-06 09:52:57,172 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:57,214 INFO [IPC Server handler 10 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000011 asked for a task
> 2014-03-06 09:52:57,214 INFO [IPC Server handler 10 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000011 given task:
> attempt_1393951799033_0207_m_000009_0
> 2014-03-06 09:52:57,275 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:57,308 INFO [IPC Server handler 11 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000008 asked for a task
> 2014-03-06 09:52:57,308 INFO [IPC Server handler 11 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000008 given task:
> attempt_1393951799033_0207_m_000001_0
> 2014-03-06 09:52:57,344 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:57,380 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:57,387 INFO [IPC Server handler 12 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000014 asked for a task
> 2014-03-06 09:52:57,387 INFO [IPC Server handler 12 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000014 given task:
> attempt_1393951799033_0207_m_000000_0
> 2014-03-06 09:52:57,398 INFO [IPC Server handler 13 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000021 asked for a task
> 2014-03-06 09:52:57,398 INFO [IPC Server handler 13 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000021 given task:
> attempt_1393951799033_0207_m_000019_0
> 2014-03-06 09:52:57,417 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:57,430 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:57,451 INFO [IPC Server handler 14 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000016 asked for a task
> 2014-03-06 09:52:57,451 INFO [IPC Server handler 14 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000016 given task:
> attempt_1393951799033_0207_m_000007_0
> 2014-03-06 09:52:57,452 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:57,453 INFO [IPC Server handler 15 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000017 asked for a task
> 2014-03-06 09:52:57,454 INFO [IPC Server handler 15 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000017 given task:
> attempt_1393951799033_0207_m_000010_0
> 2014-03-06 09:52:57,485 INFO [IPC Server handler 16 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000015 asked for a task
> 2014-03-06 09:52:57,485 INFO [IPC Server handler 16 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000015 given task:
> attempt_1393951799033_0207_m_000002_0
> 2014-03-06 09:52:57,509 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:57,572 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:57,583 INFO [IPC Server handler 17 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000019 asked for a task
> 2014-03-06 09:52:57,583 INFO [IPC Server handler 17 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000019 given task:
> attempt_1393951799033_0207_m_000015_0
> 2014-03-06 09:52:57,584 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:52:57,599 INFO [IPC Server handler 18 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000018 asked for a task
> 2014-03-06 09:52:57,599 INFO [IPC Server handler 18 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000018 given task:
> attempt_1393951799033_0207_m_000012_0
> 2014-03-06 09:52:57,631 INFO [IPC Server handler 19 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000020 asked for a task
> 2014-03-06 09:52:57,631 INFO [IPC Server handler 19 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000020 given task:
> attempt_1393951799033_0207_m_000018_0
> 2014-03-06 09:52:59,009 INFO [IPC Server handler 20 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000014_0
> 2014-03-06 09:52:59,009 INFO [IPC Server handler 20 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000014_0 is : 0.0
> 2014-03-06 09:52:59,025 INFO [IPC Server handler 21 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000004_0
> 2014-03-06 09:52:59,025 INFO [IPC Server handler 21 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000004_0 is : 0.0
> 2014-03-06 09:52:59,058 INFO [IPC Server handler 22 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000013_0
> 2014-03-06 09:52:59,059 INFO [IPC Server handler 22 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000013_0 is : 0.0
> 2014-03-06 09:52:59,075 INFO [IPC Server handler 23 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000017_0
> 2014-03-06 09:52:59,075 INFO [IPC Server handler 23 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000017_0 is : 0.0
> 2014-03-06 09:52:59,095 INFO [IPC Server handler 24 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000004_0
> 2014-03-06 09:52:59,095 INFO [IPC Server handler 24 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000004_0 is : 1.0
> 2014-03-06 09:52:59,100 INFO [IPC Server handler 25 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000004_0
> 2014-03-06 09:52:59,101 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000004_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:52:59,101 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000009 taskAttempt
> attempt_1393951799033_0207_m_000004_0
> 2014-03-06 09:52:59,101 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000004_0
> 2014-03-06 09:52:59,109 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000004_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:52:59,112 INFO [IPC Server handler 26 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000014_0
> 2014-03-06 09:52:59,112 INFO [IPC Server handler 26 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000014_0 is : 1.0
> 2014-03-06 09:52:59,116 INFO [IPC Server handler 27 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000008_0
> 2014-03-06 09:52:59,119 INFO [IPC Server handler 27 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000008_0 is : 0.0
> 2014-03-06 09:52:59,122 INFO [IPC Server handler 28 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000014_0
> 2014-03-06 09:52:59,123 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000004_0
> 2014-03-06 09:52:59,124 INFO [IPC Server handler 29 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000013_0
> 2014-03-06 09:52:59,124 INFO [IPC Server handler 29 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000013_0 is : 1.0
> 2014-03-06 09:52:59,126 INFO [IPC Server handler 0 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000013_0
> 2014-03-06 09:52:59,127 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000004 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:52:59,128 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000014_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:52:59,129 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
> 2014-03-06 09:52:59,130 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000013_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:52:59,130 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000006 taskAttempt
> attempt_1393951799033_0207_m_000014_0
> 2014-03-06 09:52:59,130 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000014_0
> 2014-03-06 09:52:59,130 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000012 taskAttempt
> attempt_1393951799033_0207_m_000013_0
> 2014-03-06 09:52:59,130 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000013_0
> 2014-03-06 09:52:59,132 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000013_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:52:59,133 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000014_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:52:59,133 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000013_0
> 2014-03-06 09:52:59,133 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000013 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:52:59,133 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000014_0
> 2014-03-06 09:52:59,133 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000014 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:52:59,133 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 2
> 2014-03-06 09:52:59,134 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 3
> 2014-03-06 09:52:59,153 INFO [IPC Server handler 1 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000017_0
> 2014-03-06 09:52:59,153 INFO [IPC Server handler 1 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000017_0 is : 1.0
> 2014-03-06 09:52:59,155 INFO [IPC Server handler 2 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000017_0
> 2014-03-06 09:52:59,155 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000017_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:52:59,155 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000007 taskAttempt
> attempt_1393951799033_0207_m_000017_0
> 2014-03-06 09:52:59,155 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000017_0
> 2014-03-06 09:52:59,157 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000017_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:52:59,157 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000017_0
> 2014-03-06 09:52:59,157 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000017 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:52:59,157 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 4
> 2014-03-06 09:52:59,174 INFO [IPC Server handler 3 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000005_0
> 2014-03-06 09:52:59,174 INFO [IPC Server handler 3 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000005_0 is : 0.0
> 2014-03-06 09:52:59,182 INFO [IPC Server handler 4 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000008_0
> 2014-03-06 09:52:59,183 INFO [IPC Server handler 4 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000008_0 is : 1.0
> 2014-03-06 09:52:59,184 INFO [IPC Server handler 5 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000008_0
> 2014-03-06 09:52:59,185 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000008_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:52:59,185 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000010 taskAttempt
> attempt_1393951799033_0207_m_000008_0
> 2014-03-06 09:52:59,185 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000008_0
> 2014-03-06 09:52:59,187 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000008_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:52:59,187 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000008_0
> 2014-03-06 09:52:59,187 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000008 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:52:59,187 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 5
> 2014-03-06 09:52:59,208 INFO [IPC Server handler 6 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000011_0
> 2014-03-06 09:52:59,208 INFO [IPC Server handler 6 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000011_0 is : 0.0
> 2014-03-06 09:52:59,221 INFO [IPC Server handler 7 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000005_0
> 2014-03-06 09:52:59,222 INFO [IPC Server handler 7 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000005_0 is : 1.0
> 2014-03-06 09:52:59,224 INFO [IPC Server handler 8 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000005_0
> 2014-03-06 09:52:59,224 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000005_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:52:59,224 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000003 taskAttempt
> attempt_1393951799033_0207_m_000005_0
> 2014-03-06 09:52:59,225 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000005_0
> 2014-03-06 09:52:59,227 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000005_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:52:59,227 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000005_0
> 2014-03-06 09:52:59,227 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000005 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:52:59,227 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 6
> 2014-03-06 09:52:59,240 INFO [IPC Server handler 9 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000009_0
> 2014-03-06 09:52:59,241 INFO [IPC Server handler 9 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000009_0 is : 0.0
> 2014-03-06 09:52:59,281 INFO [IPC Server handler 10 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000011_0
> 2014-03-06 09:52:59,281 INFO [IPC Server handler 10 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000011_0 is : 1.0
> 2014-03-06 09:52:59,283 INFO [IPC Server handler 11 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000011_0
> 2014-03-06 09:52:59,283 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000011_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:52:59,283 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000005 taskAttempt
> attempt_1393951799033_0207_m_000011_0
> 2014-03-06 09:52:59,283 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000011_0
> 2014-03-06 09:52:59,285 INFO [IPC Server handler 12 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000009_0
> 2014-03-06 09:52:59,285 INFO [IPC Server handler 12 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000009_0 is : 1.0
> 2014-03-06 09:52:59,285 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000011_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:52:59,285 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000011_0
> 2014-03-06 09:52:59,285 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000011 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:52:59,285 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 7
> 2014-03-06 09:52:59,287 INFO [IPC Server handler 13 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000009_0
> 2014-03-06 09:52:59,287 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000009_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:52:59,287 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000011 taskAttempt
> attempt_1393951799033_0207_m_000009_0
> 2014-03-06 09:52:59,287 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000009_0
> 2014-03-06 09:52:59,289 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000009_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:52:59,289 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000009_0
> 2014-03-06 09:52:59,289 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000009 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:52:59,289 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 8
> 2014-03-06 09:52:59,667 INFO [IPC Server handler 14 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000016_0
> 2014-03-06 09:52:59,667 INFO [IPC Server handler 14 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000016_0 is : 0.0
> 2014-03-06 09:52:59,707 INFO [IPC Server handler 15 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000016_0
> 2014-03-06 09:52:59,707 INFO [IPC Server handler 15 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000016_0 is : 1.0
> 2014-03-06 09:52:59,709 INFO [IPC Server handler 16 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000016_0
> 2014-03-06 09:52:59,709 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000016_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:52:59,710 INFO [ContainerLauncher #8]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000013 taskAttempt
> attempt_1393951799033_0207_m_000016_0
> 2014-03-06 09:52:59,710 INFO [ContainerLauncher #8]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000016_0
> 2014-03-06 09:52:59,711 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000016_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:52:59,711 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000016_0
> 2014-03-06 09:52:59,711 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000016 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:52:59,711 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 9
> 2014-03-06 09:52:59,895 INFO [IPC Server handler 17 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000006_0
> 2014-03-06 09:52:59,895 INFO [IPC Server handler 17 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000006_0 is : 0.0
> 2014-03-06 09:52:59,897 FATAL [IPC Server handler 18 on 57703] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1393951799033_0207_m_000006_0 - exited : java.io.IOException: java.io.IOException: java.lang.IndexOutOfBoundsException
>  at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>  at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>  at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:304)
>  at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:220)
>  at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:197)
>  at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:183)
>  at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
>  at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:396)
>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
> Caused by: java.io.IOException: java.lang.IndexOutOfBoundsException
>  at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>  at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>  at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:276)
>  at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
>  at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
>  at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
>  at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:302)
>  ... 11 more
> Caused by: java.lang.IndexOutOfBoundsException
>  at java.nio.ByteBuffer.wrap(ByteBuffer.java:352)
>  at org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.readHeader(InStream.java:180)
>  at org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.read(InStream.java:197)
>  at org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readInts(SerializationUtils.java:450)
>  at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readDirectValues(RunLengthIntegerReaderV2.java:239)
>  at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readValues(RunLengthIntegerReaderV2.java:52)
>  at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.next(RunLengthIntegerReaderV2.java:287)
>  at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StringDictionaryTreeReader.next(RecordReaderImpl.java:1046)
>  at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StringTreeReader.next(RecordReaderImpl.java:884)
>  at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StructTreeReader.next(RecordReaderImpl.java:1157)
>  at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:2196)
>  at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:108)
>  at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:57)
>  at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:274)
> ... 15 more
>
> 2014-03-06 09:52:59,897 INFO [IPC Server handler 18 on 57703] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from attempt_1393951799033_0207_m_000006_0: Error: java.io.IOException: java.io.IOException: java.lang.IndexOutOfBoundsException
>  at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>  at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>  at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:304)
>  at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:220)
>  at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:197)
>  at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:183)
>  at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
>  at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:396)
>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
> Caused by: java.io.IOException: java.lang.IndexOutOfBoundsException
>  at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>  at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>  at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:276)
>  at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
>  at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
>  at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
>  at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:302)
>  ... 11 more
> Caused by: java.lang.IndexOutOfBoundsException
>  at java.nio.ByteBuffer.wrap(ByteBuffer.java:352)
>  at org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.readHeader(InStream.java:180)
>  at org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.read(InStream.java:197)
>  at org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readInts(SerializationUtils.java:450)
>  at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readDirectValues(RunLengthIntegerReaderV2.java:239)
>  at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readValues(RunLengthIntegerReaderV2.java:52)
>  at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.next(RunLengthIntegerReaderV2.java:287)
>  at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StringDictionaryTreeReader.next(RecordReaderImpl.java:1046)
>  at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StringTreeReader.next(RecordReaderImpl.java:884)
>  at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StructTreeReader.next(RecordReaderImpl.java:1157)
>  at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:2196)
>  at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:108)
>  at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:57)
>  at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:274)
> ... 15 more
>
> 2014-03-06 09:52:59,898 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1393951799033_0207_m_000006_0: Error: java.io.IOException: java.io.IOException: java.lang.IndexOutOfBoundsException
>  at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>  at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>  at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:304)
>  at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:220)
>  at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:197)
>  at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:183)
>  at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
>  at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:396)
>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
> Caused by: java.io.IOException: java.lang.IndexOutOfBoundsException
>  at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>  at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>  at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:276)
>  at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
>  at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
>  at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
>  at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:302)
>  ... 11 more
> Caused by: java.lang.IndexOutOfBoundsException
>  at java.nio.ByteBuffer.wrap(ByteBuffer.java:352)
>  at org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.readHeader(InStream.java:180)
>  at org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.read(InStream.java:197)
>  at org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readInts(SerializationUtils.java:450)
>  at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readDirectValues(RunLengthIntegerReaderV2.java:239)
>  at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readValues(RunLengthIntegerReaderV2.java:52)
>  at org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.next(RunLengthIntegerReaderV2.java:287)
>  at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StringDictionaryTreeReader.next(RecordReaderImpl.java:1046)
>  at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StringTreeReader.next(RecordReaderImpl.java:884)
>  at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StructTreeReader.next(RecordReaderImpl.java:1157)
>  at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:2196)
>  at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:108)
>  at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:57)
>  at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:274)
> ... 15 more
>
> 2014-03-06 09:52:59,898 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_0 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2014-03-06 09:52:59,898 INFO [ContainerLauncher #7]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000004 taskAttempt
> attempt_1393951799033_0207_m_000006_0
> 2014-03-06 09:52:59,899 INFO [ContainerLauncher #7]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000006_0
> 2014-03-06 09:52:59,902 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_0 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2014-03-06 09:52:59,902 INFO [CommitterEvent Processor #1]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2014-03-06 09:52:59,902 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:20
> AssignedReds:0 CompletedMaps:9 CompletedReds:0 ContAlloc:20 ContRel:0
> HostLocal:19 RackLocal:0
> 2014-03-06 09:52:59,903 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_0 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2014-03-06 09:52:59,905 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode3 to /default-rack
> 2014-03-06 09:52:59,905 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_1 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:52:59,908 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000006
> 2014-03-06 09:52:59,908 INFO [IPC Server handler 19 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000001_0
> 2014-03-06 09:52:59,908 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000009
> 2014-03-06 09:52:59,908 INFO [IPC Server handler 19 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000001_0 is : 0.0
> 2014-03-06 09:52:59,908 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000012
> 2014-03-06 09:52:59,908 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000007
> 2014-03-06 09:52:59,908 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000003
> 2014-03-06 09:52:59,909 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000010
> 2014-03-06 09:52:59,909 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000011
> 2014-03-06 09:52:59,909 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000014_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:52:59,909 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000005
> 2014-03-06 09:52:59,909 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:12
> AssignedReds:0 CompletedMaps:9 CompletedReds:0 ContAlloc:20 ContRel:0
> HostLocal:19 RackLocal:0
> 2014-03-06 09:52:59,909 INFO [Thread-48]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on
> node pgnode3
> 2014-03-06 09:52:59,909 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000004_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:52:59,909 INFO [Thread-48]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> attempt_1393951799033_0207_m_000006_1 to list of failed maps
> 2014-03-06 09:52:59,909 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000013_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:52:59,909 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000017_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:52:59,910 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000005_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:52:59,910 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000008_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:52:59,910 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000009_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:52:59,910 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000011_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:52:59,954 INFO [IPC Server handler 20 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000001_0
> 2014-03-06 09:52:59,954 INFO [IPC Server handler 20 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000001_0 is : 1.0
> 2014-03-06 09:52:59,955 INFO [IPC Server handler 21 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000001_0
> 2014-03-06 09:52:59,955 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000001_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:52:59,956 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000008 taskAttempt
> attempt_1393951799033_0207_m_000001_0
> 2014-03-06 09:52:59,956 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000001_0
> 2014-03-06 09:52:59,957 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000001_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:52:59,957 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000001_0
> 2014-03-06 09:52:59,957 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000001 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:52:59,957 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 10
> 2014-03-06 09:52:59,969 INFO [IPC Server handler 22 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000003_0
> 2014-03-06 09:52:59,969 INFO [IPC Server handler 22 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000003_0 is : 0.0
> 2014-03-06 09:53:00,009 INFO [IPC Server handler 23 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000003_0
> 2014-03-06 09:53:00,009 INFO [IPC Server handler 23 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000003_0 is : 1.0
> 2014-03-06 09:53:00,011 INFO [IPC Server handler 24 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000003_0
> 2014-03-06 09:53:00,011 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000003_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:53:00,011 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000002 taskAttempt
> attempt_1393951799033_0207_m_000003_0
> 2014-03-06 09:53:00,011 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000003_0
> 2014-03-06 09:53:00,088 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000003_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:53:00,088 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000003_0
> 2014-03-06 09:53:00,088 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000003 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:53:00,089 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 11
> 2014-03-06 09:53:00,225 INFO [IPC Server handler 25 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000000_0
> 2014-03-06 09:53:00,226 INFO [IPC Server handler 25 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000000_0 is : 0.0
> 2014-03-06 09:53:00,332 INFO [IPC Server handler 26 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000000_0
> 2014-03-06 09:53:00,332 INFO [IPC Server handler 26 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000000_0 is : 1.0
> 2014-03-06 09:53:00,337 INFO [IPC Server handler 27 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000000_0
> 2014-03-06 09:53:00,339 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000000_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:53:00,339 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000014 taskAttempt
> attempt_1393951799033_0207_m_000000_0
> 2014-03-06 09:53:00,339 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000000_0
> 2014-03-06 09:53:00,342 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000000_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:53:00,343 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000000_0
> 2014-03-06 09:53:00,343 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000000 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:53:00,344 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 12
> 2014-03-06 09:53:00,367 INFO [IPC Server handler 28 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000015_0
> 2014-03-06 09:53:00,367 INFO [IPC Server handler 28 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000015_0 is : 0.0
> 2014-03-06 09:53:00,383 INFO [IPC Server handler 29 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000007_0
> 2014-03-06 09:53:00,383 INFO [IPC Server handler 29 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000007_0 is : 0.0
> 2014-03-06 09:53:00,427 INFO [IPC Server handler 0 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000015_0
> 2014-03-06 09:53:00,428 INFO [IPC Server handler 0 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000015_0 is : 1.0
> 2014-03-06 09:53:00,432 INFO [IPC Server handler 1 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000015_0
> 2014-03-06 09:53:00,433 INFO [IPC Server handler 2 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000010_0
> 2014-03-06 09:53:00,433 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000015_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:53:00,433 INFO [IPC Server handler 2 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000010_0 is : 0.0
> 2014-03-06 09:53:00,433 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000019 taskAttempt
> attempt_1393951799033_0207_m_000015_0
> 2014-03-06 09:53:00,434 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000015_0
> 2014-03-06 09:53:00,436 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000015_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:53:00,437 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000015_0
> 2014-03-06 09:53:00,437 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000015 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:53:00,438 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 13
> 2014-03-06 09:53:00,450 INFO [IPC Server handler 3 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000007_0
> 2014-03-06 09:53:00,450 INFO [IPC Server handler 3 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000007_0 is : 1.0
> 2014-03-06 09:53:00,454 INFO [IPC Server handler 4 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000007_0
> 2014-03-06 09:53:00,455 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000007_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:53:00,455 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000016 taskAttempt
> attempt_1393951799033_0207_m_000007_0
> 2014-03-06 09:53:00,456 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000007_0
> 2014-03-06 09:53:00,458 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000007_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:53:00,459 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000007_0
> 2014-03-06 09:53:00,459 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000007 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:53:00,460 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 14
> 2014-03-06 09:53:00,466 INFO [IPC Server handler 5 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000018_0
> 2014-03-06 09:53:00,467 INFO [IPC Server handler 5 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000018_0 is : 0.0
> 2014-03-06 09:53:00,491 INFO [IPC Server handler 6 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000010_0
> 2014-03-06 09:53:00,491 INFO [IPC Server handler 6 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000010_0 is : 1.0
> 2014-03-06 09:53:00,495 INFO [IPC Server handler 7 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000010_0
> 2014-03-06 09:53:00,496 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000010_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:53:00,497 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000017 taskAttempt
> attempt_1393951799033_0207_m_000010_0
> 2014-03-06 09:53:00,497 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000010_0
> 2014-03-06 09:53:00,500 INFO [IPC Server handler 8 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000012_0
> 2014-03-06 09:53:00,500 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000010_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:53:00,501 INFO [IPC Server handler 8 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000012_0 is : 0.0
> 2014-03-06 09:53:00,502 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000010_0
> 2014-03-06 09:53:00,503 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000010 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:53:00,504 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 15
> 2014-03-06 09:53:00,536 INFO [IPC Server handler 9 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000018_0
> 2014-03-06 09:53:00,536 INFO [IPC Server handler 9 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000018_0 is : 1.0
> 2014-03-06 09:53:00,541 INFO [IPC Server handler 10 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000018_0
> 2014-03-06 09:53:00,542 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000018_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:53:00,542 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000020 taskAttempt
> attempt_1393951799033_0207_m_000018_0
> 2014-03-06 09:53:00,542 INFO [ContainerLauncher #6]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000018_0
> 2014-03-06 09:53:00,545 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000018_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:53:00,546 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000018_0
> 2014-03-06 09:53:00,546 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000018 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:53:00,547 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 16
> 2014-03-06 09:53:00,561 INFO [IPC Server handler 11 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000012_0
> 2014-03-06 09:53:00,561 INFO [IPC Server handler 11 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000012_0 is : 1.0
> 2014-03-06 09:53:00,562 INFO [IPC Server handler 12 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000012_0
> 2014-03-06 09:53:00,562 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000012_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:53:00,562 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000018 taskAttempt
> attempt_1393951799033_0207_m_000012_0
> 2014-03-06 09:53:00,562 INFO [ContainerLauncher #1]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000012_0
> 2014-03-06 09:53:00,563 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000012_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:53:00,563 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000012_0
> 2014-03-06 09:53:00,563 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000012 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:53:00,564 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 17
> 2014-03-06 09:53:00,909 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:12
> AssignedReds:0 CompletedMaps:17 CompletedReds:0 ContAlloc:20 ContRel:0
> HostLocal:19 RackLocal:0
> 2014-03-06 09:53:00,913 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1393951799033_0207: ask=1 release= 0 newContainers=0
> finishedContainers=7 resourcelimit=<memory:49152, vCores:-20> knownNMs=3
> 2014-03-06 09:53:00,913 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000013
> 2014-03-06 09:53:00,913 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000004
> 2014-03-06 09:53:00,913 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000016_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:53:00,913 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000014
> 2014-03-06 09:53:00,914 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000019
> 2014-03-06 09:53:00,914 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000006_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:53:00,914 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000016
> 2014-03-06 09:53:00,914 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000000_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:53:00,914 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000017
> 2014-03-06 09:53:00,914 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000015_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:53:00,914 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000020
> 2014-03-06 09:53:00,915 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000007_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:53:00,915 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:5
> AssignedReds:0 CompletedMaps:17 CompletedReds:0 ContAlloc:20 ContRel:0
> HostLocal:19 RackLocal:0
> 2014-03-06 09:53:00,915 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000010_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:53:00,915 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000018_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:53:01,359 INFO [IPC Server handler 13 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000002_0
> 2014-03-06 09:53:01,360 INFO [IPC Server handler 13 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000002_0 is : 0.0
> 2014-03-06 09:53:01,410 INFO [IPC Server handler 14 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000002_0
> 2014-03-06 09:53:01,410 INFO [IPC Server handler 14 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000002_0 is : 1.0
> 2014-03-06 09:53:01,414 INFO [IPC Server handler 15 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000002_0
> 2014-03-06 09:53:01,414 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000002_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:53:01,415 INFO [ContainerLauncher #8]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000015 taskAttempt
> attempt_1393951799033_0207_m_000002_0
> 2014-03-06 09:53:01,415 INFO [ContainerLauncher #8]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000002_0
> 2014-03-06 09:53:01,418 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000002_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:53:01,418 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000002_0
> 2014-03-06 09:53:01,419 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000002 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:53:01,420 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 18
> 2014-03-06 09:53:01,442 INFO [IPC Server handler 16 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000019_0
> 2014-03-06 09:53:01,442 INFO [IPC Server handler 16 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000019_0 is : 0.0
> 2014-03-06 09:53:01,482 INFO [IPC Server handler 17 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000019_0
> 2014-03-06 09:53:01,482 INFO [IPC Server handler 17 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000019_0 is : 1.0
> 2014-03-06 09:53:01,485 INFO [IPC Server handler 18 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from
> attempt_1393951799033_0207_m_000019_0
> 2014-03-06 09:53:01,486 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000019_0 TaskAttempt Transitioned from RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2014-03-06 09:53:01,487 INFO [ContainerLauncher #7]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000021 taskAttempt
> attempt_1393951799033_0207_m_000019_0
> 2014-03-06 09:53:01,487 INFO [ContainerLauncher #7]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000019_0
> 2014-03-06 09:53:01,489 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000019_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2014-03-06 09:53:01,490 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1393951799033_0207_m_000019_0
> 2014-03-06 09:53:01,490 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000019 Task Transitioned from RUNNING to SUCCEEDED
> 2014-03-06 09:53:01,491 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 19
> 2014-03-06 09:53:01,915 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:5
> AssignedReds:0 CompletedMaps:19 CompletedReds:0 ContAlloc:20 ContRel:0
> HostLocal:19 RackLocal:0
> 2014-03-06 09:53:01,919 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000008
> 2014-03-06 09:53:01,920 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000002
> 2014-03-06 09:53:01,920 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000018
> 2014-03-06 09:53:01,920 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000001_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:53:01,920 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000015
> 2014-03-06 09:53:01,920 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000003_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:53:01,920 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2014-03-06 09:53:01,920 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000012_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:53:01,921 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000002_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:53:01,921 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> container Container: [ContainerId: container_1393951799033_0207_01_000022,
> NodeId: pgnode2:45454, NodeHttpAddress: pgnode2:8042, Resource:
> <memory:1536, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> service: 148.251.67.164:45454 }, ] to fast fail map
> 2014-03-06 09:53:01,921 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> earlierFailedMaps
> 2014-03-06 09:53:01,921 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000022 to
> attempt_1393951799033_0207_m_000006_1
> 2014-03-06 09:53:01,921 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:2
> AssignedReds:0 CompletedMaps:19 CompletedReds:0 ContAlloc:21 ContRel:0
> HostLocal:19 RackLocal:0
> 2014-03-06 09:53:01,922 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode2 to /default-rack
> 2014-03-06 09:53:01,923 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_1 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:53:01,923 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000022 taskAttempt
> attempt_1393951799033_0207_m_000006_1
> 2014-03-06 09:53:01,924 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000006_1
> 2014-03-06 09:53:01,929 INFO [ContainerLauncher #9]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000006_1
> : 13562
> 2014-03-06 09:53:01,930 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000006_1] using containerId:
> [container_1393951799033_0207_01_000022 on NM: [pgnode2:45454]
> 2014-03-06 09:53:01,930 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_1 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:53:02,537 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:53:02,544 INFO [IPC Server handler 19 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000022 asked for a task
> 2014-03-06 09:53:02,545 INFO [IPC Server handler 19 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000022 given task:
> attempt_1393951799033_0207_m_000006_1
> 2014-03-06 09:53:02,925 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1393951799033_0207: ask=1 release= 0 newContainers=0
> finishedContainers=1 resourcelimit=<memory:72192, vCores:-5> knownNMs=3
> 2014-03-06 09:53:02,925 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000021
> 2014-03-06 09:53:02,926 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:19 CompletedReds:0 ContAlloc:21 ContRel:0
> HostLocal:19 RackLocal:0
> 2014-03-06 09:53:02,926 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000019_0: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:53:04,385 INFO [IPC Server handler 20 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000006_1
> 2014-03-06 09:53:04,386 INFO [IPC Server handler 20 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000006_1 is : 0.0
> 2014-03-06 09:53:04,389 FATAL [IPC Server handler 21 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task:
> attempt_1393951799033_0207_m_000006_1 - exited : java.io.IOException:
> java.io.IOException: java.lang.IndexOutOfBoundsException
>  at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>  at
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:304)
> at
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:220)
>  at
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:197)
> at
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:183)
>  at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
>  at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
>  at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
> Caused by: java.io.IOException: java.lang.IndexOutOfBoundsException
> at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>  at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:276)
>  at
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
> at
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
>  at
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
> at
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:302)
>  ... 11 more
> Caused by: java.lang.IndexOutOfBoundsException
> at java.nio.ByteBuffer.wrap(ByteBuffer.java:352)
>  at
> org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.readHeader(InStream.java:180)
> at
> org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.read(InStream.java:197)
>  at
> org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readInts(SerializationUtils.java:450)
> at
> org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readDirectValues(RunLengthIntegerReaderV2.java:239)
>  at
> org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readValues(RunLengthIntegerReaderV2.java:52)
> at
> org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.next(RunLengthIntegerReaderV2.java:287)
>  at
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StringDictionaryTreeReader.next(RecordReaderImpl.java:1046)
> at
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StringTreeReader.next(RecordReaderImpl.java:884)
>  at
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StructTreeReader.next(RecordReaderImpl.java:1157)
> at
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:2196)
>  at
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:108)
> at
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:57)
>  at
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:274)
> ... 15 more
>
> 2014-03-06 09:53:04,389 INFO [IPC Server handler 21 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from
> attempt_1393951799033_0207_m_000006_1: Error: java.io.IOException:
> java.io.IOException: java.lang.IndexOutOfBoundsException
>  at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>  at
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:304)
> at
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:220)
>  at
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:197)
> at
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:183)
>  at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
>  at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
>  at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
> Caused by: java.io.IOException: java.lang.IndexOutOfBoundsException
> at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>  at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:276)
>  at
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
> at
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
>  at
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
> at
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:302)
>  ... 11 more
> Caused by: java.lang.IndexOutOfBoundsException
> at java.nio.ByteBuffer.wrap(ByteBuffer.java:352)
>  at
> org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.readHeader(InStream.java:180)
> at
> org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.read(InStream.java:197)
>  at
> org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readInts(SerializationUtils.java:450)
> at
> org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readDirectValues(RunLengthIntegerReaderV2.java:239)
>  at
> org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readValues(RunLengthIntegerReaderV2.java:52)
> at
> org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.next(RunLengthIntegerReaderV2.java:287)
>  at
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StringDictionaryTreeReader.next(RecordReaderImpl.java:1046)
> at
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StringTreeReader.next(RecordReaderImpl.java:884)
>  at
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StructTreeReader.next(RecordReaderImpl.java:1157)
> at
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:2196)
>  at
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:108)
> at
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:57)
>  at
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:274)
> ... 15 more
>
> 2014-03-06 09:53:04,390 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000006_1: Error:
> java.io.IOException: java.io.IOException:
> java.lang.IndexOutOfBoundsException
>  at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>  at
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:304)
> at
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:220)
>  at
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:197)
> at
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:183)
>  at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
>  at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
>  at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
> Caused by: java.io.IOException: java.lang.IndexOutOfBoundsException
> at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>  at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:276)
>  at
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
> at
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
>  at
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
> at
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:302)
>  ... 11 more
> Caused by: java.lang.IndexOutOfBoundsException
> at java.nio.ByteBuffer.wrap(ByteBuffer.java:352)
>  at
> org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.readHeader(InStream.java:180)
> at
> org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.read(InStream.java:197)
>  at
> org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readInts(SerializationUtils.java:450)
> at
> org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readDirectValues(RunLengthIntegerReaderV2.java:239)
>  at
> org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readValues(RunLengthIntegerReaderV2.java:52)
> at
> org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.next(RunLengthIntegerReaderV2.java:287)
>  at
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StringDictionaryTreeReader.next(RecordReaderImpl.java:1046)
> at
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StringTreeReader.next(RecordReaderImpl.java:884)
>  at
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StructTreeReader.next(RecordReaderImpl.java:1157)
> at
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:2196)
>  at
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:108)
> at
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:57)
>  at
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:274)
> ... 15 more
>
> 2014-03-06 09:53:04,391 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_1 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2014-03-06 09:53:04,391 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000022 taskAttempt
> attempt_1393951799033_0207_m_000006_1
> 2014-03-06 09:53:04,392 INFO [ContainerLauncher #4]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000006_1
> 2014-03-06 09:53:04,395 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_1 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2014-03-06 09:53:04,395 INFO [CommitterEvent Processor #2]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2014-03-06 09:53:04,396 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_1 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2014-03-06 09:53:04,397 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode3 to /default-rack
> 2014-03-06 09:53:04,398 INFO [Thread-48]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on
> node pgnode2
> 2014-03-06 09:53:04,398 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_2 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:53:04,399 INFO [Thread-48]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> attempt_1393951799033_0207_m_000006_2 to list of failed maps
> 2014-03-06 09:53:04,929 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:19 CompletedReds:0 ContAlloc:21 ContRel:0
> HostLocal:19 RackLocal:0
> 2014-03-06 09:53:04,932 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1393951799033_0207: ask=1 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:72192, vCores:-5> knownNMs=3
> 2014-03-06 09:53:05,935 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000022
> 2014-03-06 09:53:05,936 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2014-03-06 09:53:05,936 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000006_1: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:53:05,936 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> container Container: [ContainerId: container_1393951799033_0207_01_000023,
> NodeId: pgnode3:45454, NodeHttpAddress: pgnode3:8042, Resource:
> <memory:1536, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> service: 148.251.67.165:45454 }, ] to fast fail map
> 2014-03-06 09:53:05,936 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> earlierFailedMaps
> 2014-03-06 09:53:05,937 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000023 to
> attempt_1393951799033_0207_m_000006_2
> 2014-03-06 09:53:05,937 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:19 CompletedReds:0 ContAlloc:22 ContRel:0
> HostLocal:19 RackLocal:0
> 2014-03-06 09:53:05,937 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode3 to /default-rack
> 2014-03-06 09:53:05,938 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_2 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:53:05,939 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000023 taskAttempt
> attempt_1393951799033_0207_m_000006_2
> 2014-03-06 09:53:05,939 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000006_2
> 2014-03-06 09:53:05,944 INFO [ContainerLauncher #2]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000006_2
> : 13562
> 2014-03-06 09:53:05,945 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000006_2] using containerId:
> [container_1393951799033_0207_01_000023 on NM: [pgnode3:45454]
> 2014-03-06 09:53:05,946 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_2 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:53:06,602 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:53:06,609 INFO [IPC Server handler 22 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000023 asked for a task
> 2014-03-06 09:53:06,609 INFO [IPC Server handler 22 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000023 given task:
> attempt_1393951799033_0207_m_000006_2
> 2014-03-06 09:53:06,939 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1393951799033_0207: ask=1 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:76800, vCores:-2> knownNMs=3
> 2014-03-06 09:53:08,301 INFO [IPC Server handler 23 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000006_2
> 2014-03-06 09:53:08,301 INFO [IPC Server handler 23 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000006_2 is : 0.0
> 2014-03-06 09:53:08,302 FATAL [IPC Server handler 24 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task:
> attempt_1393951799033_0207_m_000006_2 - exited : java.io.IOException:
> java.io.IOException: java.lang.IndexOutOfBoundsException
>  at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>  at
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:304)
> at
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:220)
>  at
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:197)
> at
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:183)
>  at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
>  at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
>  at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
> Caused by: java.io.IOException: java.lang.IndexOutOfBoundsException
> at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>  at
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:276)
>  at
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
> at
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
>  at
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
> at
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:302)
>  ... 11 more
> Caused by: java.lang.IndexOutOfBoundsException
> at java.nio.ByteBuffer.wrap(ByteBuffer.java:352)
>  at
> org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.readHeader(InStream.java:180)
> at
> org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.read(InStream.java:197)
>  at
> org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readInts(SerializationUtils.java:450)
> at
> org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readDirectValues(RunLengthIntegerReaderV2.java:239)
>  at
> org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.readValues(RunLengthIntegerReaderV2.java:52)
> at
> org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReaderV2.next(RunLengthIntegerReaderV2.java:287)
>  at
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StringDictionaryTreeReader.next(RecordReaderImpl.java:1046)
> at
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StringTreeReader.next(RecordReaderImpl.java:884)
>  at
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StructTreeReader.next(RecordReaderImpl.java:1157)
> at
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:2196)
>  at
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:108)
> at
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:57)
>  at
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:274)
> ... 15 more
>
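The innermost frames above point at java.nio.ByteBuffer.wrap being called from ORC's CompressedStream.readHeader. Below is a minimal, self-contained Java sketch of that JDK behavior; it is not Hive code, and the buffer size and chunk length are invented for illustration. wrap(array, offset, length) throws IndexOutOfBoundsException whenever offset + length does not fit inside the backing array, which is what a compressed-chunk header that decodes to an impossible length would trigger.

import java.nio.ByteBuffer;

public class WrapOutOfBounds {
    public static void main(String[] args) {
        // Pretend these are the bytes buffered for one compressed ORC chunk.
        byte[] compressed = new byte[64];
        int offset = 0;
        // Hypothetical chunk length decoded from a bad header (made-up value).
        int chunkLength = 256;

        // Throws java.lang.IndexOutOfBoundsException because
        // offset + chunkLength exceeds compressed.length.
        ByteBuffer chunk = ByteBuffer.wrap(compressed, offset, chunkLength);
        System.out.println(chunk.remaining());
    }
}
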
> 2014-03-06 09:53:08,302 INFO [IPC Server handler 24 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from
> attempt_1393951799033_0207_m_000006_2: Error: java.io.IOException:
> java.io.IOException: java.lang.IndexOutOfBoundsException
> [stack trace identical to the FATAL entry for attempt_1393951799033_0207_m_000006_2 above]
>
> 2014-03-06 09:53:08,302 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000006_2: Error:
> java.io.IOException: java.io.IOException:
> java.lang.IndexOutOfBoundsException
> [stack trace identical to the FATAL entry for attempt_1393951799033_0207_m_000006_2 above]
>
> 2014-03-06 09:53:08,303 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_2 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2014-03-06 09:53:08,303 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000023 taskAttempt
> attempt_1393951799033_0207_m_000006_2
> 2014-03-06 09:53:08,303 INFO [ContainerLauncher #0]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000006_2
> 2014-03-06 09:53:08,304 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_2 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2014-03-06 09:53:08,304 INFO [CommitterEvent Processor #3]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2014-03-06 09:53:08,305 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_2 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2014-03-06 09:53:08,305 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode3 to /default-rack
> 2014-03-06 09:53:08,305 INFO [Thread-48]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures on
> node pgnode3
> 2014-03-06 09:53:08,305 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_3 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2014-03-06 09:53:08,305 INFO [Thread-48]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added
> attempt_1393951799033_0207_m_000006_3 to list of failed maps
> 2014-03-06 09:53:08,941 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
> Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:19 CompletedReds:0 ContAlloc:22 ContRel:0
> HostLocal:19 RackLocal:0
> 2014-03-06 09:53:08,946 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1393951799033_0207: ask=1 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:76800, vCores:-2> knownNMs=3
> 2014-03-06 09:53:09,949 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received
> completed container container_1393951799033_0207_01_000023
> 2014-03-06 09:53:09,949 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
> containers 1
> 2014-03-06 09:53:09,949 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000006_2: Container killed by the
> ApplicationMaster.
> Container killed on request. Exit code is 143
>
> 2014-03-06 09:53:09,949 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning
> container Container: [ContainerId: container_1393951799033_0207_01_000024,
> NodeId: pgnode3:45454, NodeHttpAddress: pgnode3:8042, Resource:
> <memory:1536, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken,
> service: 148.251.67.165:45454 }, ] to fast fail map
> 2014-03-06 09:53:09,949 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from
> earlierFailedMaps
> 2014-03-06 09:53:09,949 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
> container container_1393951799033_0207_01_000024 to
> attempt_1393951799033_0207_m_000006_3
> 2014-03-06 09:53:09,949 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
> Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1
> AssignedReds:0 CompletedMaps:19 CompletedReds:0 ContAlloc:23 ContRel:0
> HostLocal:19 RackLocal:0
> 2014-03-06 09:53:09,949 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved pgnode3 to /default-rack
> 2014-03-06 09:53:09,949 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_3 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2014-03-06 09:53:09,950 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1393951799033_0207_01_000024 taskAttempt
> attempt_1393951799033_0207_m_000006_3
> 2014-03-06 09:53:09,950 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Launching attempt_1393951799033_0207_m_000006_3
> 2014-03-06 09:53:09,952 INFO [ContainerLauncher #5]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle
> port returned by ContainerManager for attempt_1393951799033_0207_m_000006_3
> : 13562
> 2014-03-06 09:53:09,953 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1393951799033_0207_m_000006_3] using containerId:
> [container_1393951799033_0207_01_000024 on NM: [pgnode3:45454]
> 2014-03-06 09:53:09,954 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_3 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
>  2014-03-06 09:53:10,563 INFO [Socket Reader #1 for port 57703]
> SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
> job_1393951799033_0207 (auth:SIMPLE)
> 2014-03-06 09:53:10,570 INFO [IPC Server handler 25 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID :
> jvm_1393951799033_0207_m_000024 asked for a task
> 2014-03-06 09:53:10,570 INFO [IPC Server handler 25 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID:
> jvm_1393951799033_0207_m_000024 given task:
> attempt_1393951799033_0207_m_000006_3
> 2014-03-06 09:53:10,952 INFO [RMCommunicator Allocator]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
> for application_1393951799033_0207: ask=1 release= 0 newContainers=0
> finishedContainers=0 resourcelimit=<memory:78336, vCores:-1> knownNMs=3
> 2014-03-06 09:53:12,343 INFO [IPC Server handler 26 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Status update from
> attempt_1393951799033_0207_m_000006_3
> 2014-03-06 09:53:12,343 INFO [IPC Server handler 26 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt
> attempt_1393951799033_0207_m_000006_3 is : 0.0
> 2014-03-06 09:53:12,344 FATAL [IPC Server handler 27 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task:
> attempt_1393951799033_0207_m_000006_3 - exited : java.io.IOException:
> java.io.IOException: java.lang.IndexOutOfBoundsException
> [stack trace identical to the one printed in full for attempt_1393951799033_0207_m_000006_2 above]
>
> 2014-03-06 09:53:12,344 INFO [IPC Server handler 27 on 57703]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from
> attempt_1393951799033_0207_m_000006_3: Error: java.io.IOException:
> java.io.IOException: java.lang.IndexOutOfBoundsException
> [stack trace identical to the one printed in full for attempt_1393951799033_0207_m_000006_2 above]
>
> 2014-03-06 09:53:12,345 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1393951799033_0207_m_000006_3: Error:
> java.io.IOException: java.io.IOException:
> java.lang.IndexOutOfBoundsException
> [stack trace identical to the one printed in full for attempt_1393951799033_0207_m_000006_2 above]
>
> 2014-03-06 09:53:12,345 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_3 TaskAttempt Transitioned from RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2014-03-06 09:53:12,345 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1393951799033_0207_01_000024 taskAttempt
> attempt_1393951799033_0207_m_000006_3
> 2014-03-06 09:53:12,345 INFO [ContainerLauncher #3]
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING
> attempt_1393951799033_0207_m_000006_3
> 2014-03-06 09:53:12,346 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_3 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2014-03-06 09:53:12,347 INFO [CommitterEvent Processor #4]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: TASK_ABORT
> 2014-03-06 09:53:12,347 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1393951799033_0207_m_000006_3 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2014-03-06 09:53:12,348 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1393951799033_0207_m_000006 Task Transitioned from RUNNING to FAILED
> 2014-03-06 09:53:12,348 INFO [Thread-48]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures on
> node pgnode3
> 2014-03-06 09:53:12,348 INFO [Thread-48]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted
> host pgnode3
> 2014-03-06 09:53:12,348 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 20
> 2014-03-06 09:53:12,348 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
> failed. failedMaps:1 failedReduces:0
> 2014-03-06 09:53:12,349 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1393951799033_0207Job Transitioned from RUNNING to FAIL_ABORT
> 2014-03-06 09:53:12,349 INFO [CommitterEvent Processor #0]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing
> the event EventType: JOB_ABORT
> 2014-03-06 09:53:12,352 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1393951799033_0207Job Transitioned from FAIL_ABORT to FAILED
> 2014-03-06 09:53:12,352 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so
> this is the last retry
> 2014-03-06 09:53:12,352 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
> isAMLastRetry: true
> 2014-03-06 09:53:12,352 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator
> notified that shouldUnregistered is: true
> 2014-03-06 09:53:12,352 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry:
> true
> 2014-03-06 09:53:12,352 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
> JobHistoryEventHandler notified that forceJobCompletion is true
> 2014-03-06 09:53:12,352 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the
> services
> 2014-03-06 09:53:12,353 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping
> JobHistoryEventHandler. Size of the outstanding queue size is 1
> 2014-03-06 09:53:12,353 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: In stop,
> writing event JOB_FAILED
> 2014-03-06 09:53:12,450 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> hdfs://pgmaster:8020/user/hue/.staging/job_1393951799033_0207/job_1393951799033_0207_1.jhist
> to
> hdfs://pgmaster:8020/mr-history/tmp/hue/job_1393951799033_0207-1394095970666-hue-select+mm_uuid+from+publigroupe.denorm+...13%28Stage-1394095992348-19-0-FAILED-default.jhist_tmp
> 2014-03-06 09:53:12,513 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> done location:
> hdfs://pgmaster:8020/mr-history/tmp/hue/job_1393951799033_0207-1394095970666-hue-select+mm_uuid+from+publigroupe.denorm+...13%28Stage-1394095992348-19-0-FAILED-default.jhist_tmp
> 2014-03-06 09:53:12,531 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> hdfs://pgmaster:8020/user/hue/.staging/job_1393951799033_0207/job_1393951799033_0207_1_conf.xml
> to
> hdfs://pgmaster:8020/mr-history/tmp/hue/job_1393951799033_0207_conf.xml_tmp
> 2014-03-06 09:53:12,596 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to
> done location:
> hdfs://pgmaster:8020/mr-history/tmp/hue/job_1393951799033_0207_conf.xml_tmp
> 2014-03-06 09:53:12,633 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
> done:
> hdfs://pgmaster:8020/mr-history/tmp/hue/job_1393951799033_0207.summary_tmp
> to hdfs://pgmaster:8020/mr-history/tmp/hue/job_1393951799033_0207.summary
> 2014-03-06 09:53:12,647 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
> done:
> hdfs://pgmaster:8020/mr-history/tmp/hue/job_1393951799033_0207_conf.xml_tmp
> to hdfs://pgmaster:8020/mr-history/tmp/hue/job_1393951799033_0207_conf.xml
> 2014-03-06 09:53:12,662 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to
> done:
> hdfs://pgmaster:8020/mr-history/tmp/hue/job_1393951799033_0207-1394095970666-hue-select+mm_uuid+from+publigroupe.denorm+...13%28Stage-1394095992348-19-0-FAILED-default.jhist_tmp
> to
> hdfs://pgmaster:8020/mr-history/tmp/hue/job_1393951799033_0207-1394095970666-hue-select+mm_uuid+from+publigroupe.denorm+...13%28Stage-1394095992348-19-0-FAILED-default.jhist
> 2014-03-06 09:53:12,663 INFO [Thread-61]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped
> JobHistoryEventHandler. super.stop()
> 2014-03-06 09:53:12,665 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job
> diagnostics to Task failed task_1393951799033_0207_m_000006
> Job failed as tasks failed. failedMaps:1 failedReduces:0
>
> 2014-03-06 09:53:12,665 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is
> http://pgmaster:19888/jobhistory/job/job_1393951799033_0207
> 2014-03-06 09:53:12,670 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for
> application to be successfully unregistered.
> 2014-03-06 09:53:13,672 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats:
> PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0
> CompletedMaps:19 CompletedReds:0 ContAlloc:23 ContRel:0 HostLocal:19
> RackLocal:0
> 2014-03-06 09:53:13,676 INFO [Thread-61]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory
> hdfs://pgmaster:8020 /user/hue/.staging/job_1393951799033_0207
> 2014-03-06 09:53:13,708 INFO [Thread-61] org.apache.hadoop.ipc.Server:
> Stopping server on 57703
> 2014-03-06 09:53:13,709 INFO [IPC Server listener on 57703]
> org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 57703
> 2014-03-06 09:53:13,709 INFO [TaskHeartbeatHandler PingChecker]
> org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
> TaskHeartbeatHandler thread interrupted
> 2014-03-06 09:53:13,709 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
> 2014-03-06 09:53:14,362 INFO [Thread-1]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster received a
> signal. Signaling RMCommunicator and JobHistoryEventHandler.
> 2014-03-06 09:53:14,362 INFO [Thread-1]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator
> notified that iSignalled is: true
> 2014-03-06 09:53:14,362 INFO [Thread-1]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator
> isAMLastRetry: true
> 2014-03-06 09:53:14,362 INFO [Thread-1]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator
> notified that shouldUnregistered is: true
> 2014-03-06 09:53:14,362 INFO [Thread-1]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry:
> true
> 2014-03-06 09:53:14,362 INFO [Thread-1]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
> JobHistoryEventHandler notified that forceJobCompletion is true
>
>
> --
> Philippe Kernévez
>
>
>
> Technical Director (Switzerland),
> pkernevez@octo.com
> +41 79 888 33 32
>
> Find OCTO on OCTO Talk: http://blog.octo.com
> OCTO Technology http://www.octo.com
>
