kylin-user mailing list archives

From Li Yang <liy...@apache.org>
Subject Re: I hit a problem while running the Kylin demo; have you seen this error before?
Date Mon, 20 Feb 2017 07:18:03 GMT
> Error running local (uberized) 'child' : java.lang.NoSuchMethodError:
> org.apache.hadoop.hbase.util.ChecksumType.getChecksumObject()Ljava/util/zip/Checksum;
>     at org.apache.hadoop.hbase.io.hfile.ChecksumUtil.generateChecksums(ChecksumUtil.java:73)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.finishBlock(HFileBlock.java:943)

This looks like a mismatch of HBase libraries. Could there be multiple versions of
HBase in your environment?
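One quick way to check (a sketch, not from the original thread; the directory layout below is a stand-in for real $HADOOP_HOME / $HBASE_HOME / $KYLIN_HOME lib directories) is to list every hbase-common jar the jobs could pick up and see whether more than one version shows up:

```shell
# Sketch: flag mixed HBase jar versions. The fake layout stands in for the
# lib directories of a real Hadoop/HBase/Kylin installation.
tmp=$(mktemp -d)
mkdir -p "$tmp/hadoop/lib" "$tmp/hbase/lib"
touch "$tmp/hadoop/lib/hbase-common-0.98.8-hadoop2.jar"   # stale copy
touch "$tmp/hbase/lib/hbase-common-1.2.4.jar"             # the intended one

# More than one distinct hbase-common version on the classpath is the
# classic cause of a NoSuchMethodError like the one above.
versions=$(find "$tmp" -name 'hbase-common-*.jar' -exec basename {} \; | sort -u)
count=$(printf '%s\n' "$versions" | wc -l | tr -d ' ')
printf '%s\n' "$versions"
if [ "$count" -gt 1 ]; then
  echo "WARNING: multiple HBase versions found"
fi
rm -rf "$tmp"
```

On a real cluster you would point `find` at the actual install directories instead of the temp layout.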

> Three Linux VMs on VMware, 120 GB of disk each (not pre-allocated), with 1 GB of RAM.

So only 3 GB of memory in total? That's very limited. The smallest sandbox I
have tried had 10 GB of memory.
I don't think 3 GB is enough to run Hadoop + Kylin.
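For a rough sense of why (the numbers below are the stock Hadoop defaults, assumed rather than taken from this thread): the MR ApplicationMaster alone requests more memory than a 1 GB node has, even in uber mode, where the map and reduce tasks run inside the AM's own container:

```shell
# Back-of-envelope sketch with default Hadoop memory settings.
node_mb=1024     # 1 GB VM, per the report above
am_mb=1536       # yarn.app.mapreduce.am.resource.mb default
map_mb=1024      # mapreduce.map.memory.mb default (unused in uber mode)

# In an uberized job the tasks execute inside the AM container, so the
# AM request is the floor for what one node must supply.
need_mb=$am_mb
echo "node: ${node_mb} MB, uber AM needs: ${need_mb} MB"
if [ "$need_mb" -gt "$node_mb" ]; then
  echo "over-committed before HBase and the OS are even counted"
fi
```

Lowering these properties can make the job start, but with HBase region servers and the OS sharing the same 1 GB, it is likely to fail elsewhere.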

Yang




On Thu, Feb 16, 2017 at 12:00 PM, Luke Han <lukehan@apache.org> wrote:

> Forward to user mailing list for further support.
>
> Thanks.
>
>
> ---------- Forwarded message ----------
> From: ⒐o、cc <c-island@hotmail.com>
> Date: Tue, Feb 14, 2017 at 7:32 PM
> Subject: I hit a problem while running the Kylin demo; have you seen this error before?
> To: lukehan <lukehan@apache.org>
>
>
> 1. Test environment
> Since Apache Kylin uses Hive and HBase, a Hadoop cluster is a prerequisite, so the first step is to make sure
> the Hadoop cluster is stable and healthy, and then that Hive and HBase are installed and running normally.
>
> The test environment is as follows:
>
> Three Linux VMs on VMware, 120 GB of disk each (not pre-allocated), with 1 GB of RAM.
> IP addresses and hostnames:
> 192.168.65.61 kylin1
> 192.168.65.62 kylin2
> 192.168.65.63 kylin3
>
> Host layout:
> 192.168.65.61 is the master and runs the NameNode and ResourceManager processes. All three hosts act as
> slaves and run the DataNode and NodeManager processes.
> Operating system: CentOS release 6.5
> Java version: "1.8.0_121"
> Hadoop version: hadoop-2.7.3
> ZooKeeper version: zookeeper-3.4.9
> HBase version: hbase-1.2.4
> Hive version: hive-2.1.1
> Kylin version: apache-kylin-1.6.0-hbase1.x-bin.tar.gz
>
> [Problem]:
> Every time I run the demo that ships with Kylin, it fails at #17 Step Name: Convert Cuboid Data to HFile
>
> The log output is:
> Log Type: syslog
> Log Upload Time: 8-Feb-2017 16:34:36
> Log Length: 93072
> 2017-02-08 16:34:14,664 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> Created MRAppMaster for application appattempt_1486537920487_0014_000001
> 2017-02-08 16:34:15,109 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> Executing with tokens:
> 2017-02-08 16:34:15,272 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> Kind: YARN_AM_RM_TOKEN, Service: , Ident: (appAttemptId { application_id {
> id: 14 cluster_timestamp: 1486537920487 } attemptId: 1 } keyId: 909018579)
> 2017-02-08 16:34:15,293 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> Using mapred newApiCommitter.
> 2017-02-08 16:34:16,191 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> OutputCommitter set in config null
> 2017-02-08 16:34:16,268 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> OutputCommitter is org.apache.hadoop.mapreduce.
> lib.output.FileOutputCommitter
> 2017-02-08 16:34:16,302 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for
> class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> 2017-02-08 16:34:16,303 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class org.apache.hadoop.mapreduce.
> v2.app.job.event.JobEventType
> for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$
> JobEventDispatcher
> 2017-02-08 16:34:16,304 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class org.apache.hadoop.mapreduce.
> v2.app.job.event.TaskEventType
> for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$
> TaskEventDispatcher
> 2017-02-08 16:34:16,306 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType
> for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$
> TaskAttemptEventDispatcher
> 2017-02-08 16:34:16,306 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class org.apache.hadoop.mapreduce.v2.app.commit.
> CommitterEventType
> for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
> 2017-02-08 16:34:16,315 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class
> org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType
> for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$
> SpeculatorEventDispatcher
> 2017-02-08 16:34:16,316 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class
> org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType
> for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$
> ContainerAllocatorRouter
> 2017-02-08 16:34:16,317 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType
> for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$
> ContainerLauncherRouter
> 2017-02-08 16:34:16,437 INFO [main]
> org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils:
> Default file system [hdfs://cluster:8020]
> 2017-02-08 16:34:16,472 INFO [main]
> org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils:
> Default file system [hdfs://cluster:8020]
> 2017-02-08 16:34:16,510 INFO [main]
> org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils:
> Default file system [hdfs://cluster:8020]
> 2017-02-08 16:34:16,531 INFO [main]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
> Emitting job history data to the timeline server is not enabled
> 2017-02-08 16:34:16,591 INFO [main]
> org.apache.hadoop.yarn.event.AsyncDispatcher:
> Registering class
> org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type
> for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$
> JobFinishEventHandler
> 2017-02-08 16:34:16,882 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsConfig:
> loaded properties from hadoop-metrics2.properties
> 2017-02-08 16:34:16,983 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
> Scheduled snapshot period at 10 second(s).
> 2017-02-08 16:34:16,983 INFO [main]
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
> MRAppMaster metrics system started
> 2017-02-08 16:34:16,995 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> Adding job token for job_1486537920487_0014 to jobTokenSecretManager
> 2017-02-08 16:34:17,157 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> Uberizing job job_1486537920487_0014: 9m+1r tasks (19224898 input bytes)
> will run sequentially on single node.
> 2017-02-08 16:34:17,183 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> Input size for job job_1486537920487_0014 = 19224898. Number of splits = 9
> 2017-02-08 16:34:17,185 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> Number of reduces for job job_1486537920487_0014 = 1
> 2017-02-08 16:34:17,185 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1486537920487_0014Job Transitioned from NEW to INITED
> 2017-02-08 16:34:17,186 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> MRAppMaster uberizing job job_1486537920487_0014 in local container
> ("uber-AM") on node hdslave1:58379.
> 2017-02-08 16:34:17,236 INFO [main] org.apache.hadoop.ipc.
> CallQueueManager:
> Using callQueue class java.util.concurrent.LinkedBlockingQueue
> 2017-02-08 16:34:17,251 INFO [Socket Reader #1 for port 40737]
> org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 40737
> 2017-02-08 16:34:17,278 INFO [main] org.apache.hadoop.yarn.
> factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol
> org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
> 2017-02-08 16:34:17,279 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2017-02-08 16:34:17,279 INFO [IPC Server listener on 40737]
> org.apache.hadoop.ipc.Server: IPC Server listener on 40737: starting
> 2017-02-08 16:34:17,281 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.client.MRClientService:
> Instantiated MRClientService at hdslave1/172.20.4.210:40737
> 2017-02-08 16:34:17,376 INFO [main] org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2017-02-08 16:34:17,382 WARN [main] org.apache.hadoop.http.HttpRequestLog:
> Jetty request log can only be enabled using Log4j
> 2017-02-08 16:34:17,397 INFO [main] org.apache.hadoop.http.HttpServer2:
> Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$
> QuotingInputFilter)
> 2017-02-08 16:34:17,446 INFO [main] org.apache.hadoop.http.HttpServer2:
> Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.
> server.webproxy.amfilter.AmIpFilter) to context mapreduce
> 2017-02-08 16:34:17,446 INFO [main] org.apache.hadoop.http.HttpServer2:
> Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.
> server.webproxy.amfilter.AmIpFilter) to context static
> 2017-02-08 16:34:17,451 INFO [main] org.apache.hadoop.http.HttpServer2:
> adding path spec: /mapreduce/*
> 2017-02-08 16:34:17,451 INFO [main] org.apache.hadoop.http.HttpServer2:
> adding path spec: /ws/*
> 2017-02-08 16:34:17,464 INFO [main] org.apache.hadoop.http.HttpServer2:
> Jetty bound to port 40435
> 2017-02-08 16:34:17,464 INFO [main] org.mortbay.log: jetty-6.1.26
> 2017-02-08 16:34:17,509 INFO [main] org.mortbay.log: Extract
> jar:file:/server/colony/app/hadoop-2.6.4/share/hadoop/
> yarn/hadoop-yarn-common-2.6.4.jar!/webapps/mapreduce to
> /tmp/Jetty_0_0_0_0_40435_mapreduce____.ocw6g1/webapp
> 2017-02-08 16:34:17,877 INFO [main] org.mortbay.log: Started HttpServer2$
> SelectChannelConnectorWithSafeStartup@0.0.0.0:40435
> 2017-02-08 16:34:17,878 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
> Web app /mapreduce started at 40435
> 2017-02-08 16:34:18,391 INFO [main] org.apache.hadoop.yarn.webapp.WebApps:
> Registered webapp guice modules
> 2017-02-08 16:34:18,399 INFO [main] org.apache.hadoop.ipc.
> CallQueueManager:
> Using callQueue class java.util.concurrent.LinkedBlockingQueue
> 2017-02-08 16:34:18,400 INFO [Socket Reader #1 for port 52930]
> org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 52930
> 2017-02-08 16:34:18,407 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2017-02-08 16:34:18,408 INFO [IPC Server listener on 52930]
> org.apache.hadoop.ipc.Server: IPC Server listener on 52930: starting
> 2017-02-08 16:34:18,590 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> maxContainerCapability: <memory:8192, vCores:32>
> 2017-02-08 16:34:18,590 INFO [main]
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
> queue: default
> 2017-02-08 16:34:18,646 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1486537920487_0014Job Transitioned from INITED to SETUP
> 2017-02-08 16:34:18,649 INFO [CommitterEvent Processor #0]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> the event EventType: JOB_SETUP
> 2017-02-08 16:34:18,692 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1486537920487_0014Job Transitioned from SETUP to RUNNING
> 2017-02-08 16:34:18,722 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave1 to
> /default-rack
> 2017-02-08 16:34:18,723 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave2 to
> /default-rack
> 2017-02-08 16:34:18,727 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000000 Task Transitioned from NEW to SCHEDULED
> 2017-02-08 16:34:18,728 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdmaster to
> /default-rack
> 2017-02-08 16:34:18,728 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave3 to
> /default-rack
> 2017-02-08 16:34:18,728 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000001 Task Transitioned from NEW to SCHEDULED
> 2017-02-08 16:34:18,728 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave1 to
> /default-rack
> 2017-02-08 16:34:18,728 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdmaster to
> /default-rack
> 2017-02-08 16:34:18,729 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000002 Task Transitioned from NEW to SCHEDULED
> 2017-02-08 16:34:18,729 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave1 to
> /default-rack
> 2017-02-08 16:34:18,729 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdmaster to
> /default-rack
> 2017-02-08 16:34:18,729 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000003 Task Transitioned from NEW to SCHEDULED
> 2017-02-08 16:34:18,730 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdmaster to
> /default-rack
> 2017-02-08 16:34:18,730 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave3 to
> /default-rack
> 2017-02-08 16:34:18,730 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000004 Task Transitioned from NEW to SCHEDULED
> 2017-02-08 16:34:18,730 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave1 to
> /default-rack
> 2017-02-08 16:34:18,730 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdmaster to
> /default-rack
> 2017-02-08 16:34:18,731 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000005 Task Transitioned from NEW to SCHEDULED
> 2017-02-08 16:34:18,731 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave2 to
> /default-rack
> 2017-02-08 16:34:18,731 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdmaster to
> /default-rack
> 2017-02-08 16:34:18,731 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000006 Task Transitioned from NEW to SCHEDULED
> 2017-02-08 16:34:18,731 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave1 to
> /default-rack
> 2017-02-08 16:34:18,732 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdmaster to
> /default-rack
> 2017-02-08 16:34:18,732 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000007 Task Transitioned from NEW to SCHEDULED
> 2017-02-08 16:34:18,733 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave2 to
> /default-rack
> 2017-02-08 16:34:18,733 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdmaster to
> /default-rack
> 2017-02-08 16:34:18,734 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000008 Task Transitioned from NEW to SCHEDULED
> 2017-02-08 16:34:18,734 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_r_000000 Task Transitioned from NEW to SCHEDULED
> 2017-02-08 16:34:18,737 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000000_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2017-02-08 16:34:18,737 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000001_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2017-02-08 16:34:18,738 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000002_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2017-02-08 16:34:18,738 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000003_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2017-02-08 16:34:18,738 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000004_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2017-02-08 16:34:18,738 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000005_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2017-02-08 16:34:18,739 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000006_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2017-02-08 16:34:18,739 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000007_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2017-02-08 16:34:18,739 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000008_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2017-02-08 16:34:18,739 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_r_000000_0 TaskAttempt Transitioned from NEW to
> UNASSIGNED
> 2017-02-08 16:34:18,740 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.local.LocalContainerAllocator:
> Processing the event EventType: CONTAINER_REQ
> 2017-02-08 16:34:18,756 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.local.LocalContainerAllocator:
> Processing the event EventType: CONTAINER_REQ
> 2017-02-08 16:34:18,756 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.local.LocalContainerAllocator:
> Processing the event EventType: CONTAINER_REQ
> 2017-02-08 16:34:18,756 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.local.LocalContainerAllocator:
> Processing the event EventType: CONTAINER_REQ
> 2017-02-08 16:34:18,756 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.local.LocalContainerAllocator:
> Processing the event EventType: CONTAINER_REQ
> 2017-02-08 16:34:18,756 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.local.LocalContainerAllocator:
> Processing the event EventType: CONTAINER_REQ
> 2017-02-08 16:34:18,757 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.local.LocalContainerAllocator:
> Processing the event EventType: CONTAINER_REQ
> 2017-02-08 16:34:18,757 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.local.LocalContainerAllocator:
> Processing the event EventType: CONTAINER_REQ
> 2017-02-08 16:34:18,757 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.local.LocalContainerAllocator:
> Processing the event EventType: CONTAINER_REQ
> 2017-02-08 16:34:18,757 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.local.LocalContainerAllocator:
> Processing the event EventType: CONTAINER_REQ
> 2017-02-08 16:34:18,796 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event
> Writer
> setup for JobId: job_1486537920487_0014, File: hdfs://cluster:8020/tmp/
> hadoop-yarn/staging/root/.staging/job_1486537920487_
> 0014/job_1486537920487_0014_1.jhist
> 2017-02-08 16:34:18,829 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave1 to
> /default-rack
> 2017-02-08 16:34:18,852 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar
> file on the remote FS is hdfs://cluster/tmp/hadoop-
> yarn/staging/root/.staging/job_1486537920487_0014/job.jar
> 2017-02-08 16:34:18,857 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf
> file on the remote FS is /tmp/hadoop-yarn/staging/root/
> .staging/job_1486537920487_0014/job.xml
> 2017-02-08 16:34:18,899 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0
> tokens and #1 secret keys for NM use for launching container
> 2017-02-08 16:34:18,900 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of
> containertokens_dob is 1
> 2017-02-08 16:34:18,900 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting
> shuffle token in serviceData
> 2017-02-08 16:34:19,218 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000000_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2017-02-08 16:34:19,220 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave1 to
> /default-rack
> 2017-02-08 16:34:19,221 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000001_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2017-02-08 16:34:19,222 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave1 to
> /default-rack
> 2017-02-08 16:34:19,223 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000002_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2017-02-08 16:34:19,223 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave1 to
> /default-rack
> 2017-02-08 16:34:19,224 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000003_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2017-02-08 16:34:19,225 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave1 to
> /default-rack
> 2017-02-08 16:34:19,226 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000004_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2017-02-08 16:34:19,226 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave1 to
> /default-rack
> 2017-02-08 16:34:19,227 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000005_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2017-02-08 16:34:19,228 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave1 to
> /default-rack
> 2017-02-08 16:34:19,229 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000006_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2017-02-08 16:34:19,229 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave1 to
> /default-rack
> 2017-02-08 16:34:19,230 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000007_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2017-02-08 16:34:19,230 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave1 to
> /default-rack
> 2017-02-08 16:34:19,231 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000008_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2017-02-08 16:34:19,251 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.yarn.util.RackResolver: Resolved hdslave1 to
> /default-rack
> 2017-02-08 16:34:19,252 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_r_000000_0 TaskAttempt Transitioned from
> UNASSIGNED to ASSIGNED
> 2017-02-08 16:34:19,254 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000000_0
> 2017-02-08 16:34:19,256 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000001_0
> 2017-02-08 16:34:19,257 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000002_0
> 2017-02-08 16:34:19,258 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000003_0
> 2017-02-08 16:34:19,259 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000004_0
> 2017-02-08 16:34:19,261 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000005_0
> 2017-02-08 16:34:19,264 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000006_0
> 2017-02-08 16:34:19,263 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> mapreduce.cluster.local.dir for uber task: /data/ha/hadoop/data/tmp/nm-
> local-dir/usercache/root/appcache/application_1486537920487_0014
> 2017-02-08 16:34:19,266 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1486537920487_0014_m_000000_0] using containerId:
> [container_1486537920487_0014_01_000001 on NM: [hdslave1:58379]
> 2017-02-08 16:34:19,266 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000007_0
> 2017-02-08 16:34:19,267 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000008_0
> 2017-02-08 16:34:19,268 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_r_000000_0
> 2017-02-08 16:34:19,275 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000000_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
> 2017-02-08 16:34:19,276 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000000 Task Transitioned from SCHEDULED to
> RUNNING
> 2017-02-08 16:34:19,303 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 2017-02-08 16:34:19,383 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Processing split: hdfs://cluster/kylin/kylin_metadata/kylin-983f0299-6dfc-
> 4100-bfc7-c048be622ef3/kylin_sales_cube/cuboid/5d_cuboid/
> part-r-00000:0+5062193
> 2017-02-08 16:34:19,440 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> (EQUATOR) 0 kvi 26214396(104857584)
> 2017-02-08 16:34:19,440 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> mapreduce.task.io.sort.mb: 100
> 2017-02-08 16:34:19,440 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> soft limit at 83886080
> 2017-02-08 16:34:19,440 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> bufstart = 0; bufvoid = 104857600
> 2017-02-08 16:34:19,440 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> kvstart = 26214396; length = 6553600
> 2017-02-08 16:34:19,480 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Map output collector class = org.apache.hadoop.mapred.
> MapTask$MapOutputBuffer
> 2017-02-08 16:34:19,511 INFO [uber-SubtaskRunner]
> org.apache.kylin.engine.mr.KylinMapper: The conf for current mapper will be 1083236408
> 2017-02-08 16:34:19,518 INFO [uber-SubtaskRunner]
> org.apache.kylin.engine.mr
> .common.AbstractHadoopJob: The absolute path for meta dir is
> /home/data/ha/hadoop/data/tmp/nm-local-dir/usercache/root/
> appcache/application_1486537920487_0014/container_
> 1486537920487_0014_01_000001/meta
> 2017-02-08 16:34:19,524 INFO [uber-SubtaskRunner]
> org.apache.kylin.common.KylinConfig:
> New KylinConfig 437652820
> 2017-02-08 16:34:19,524 INFO [uber-SubtaskRunner]
> org.apache.kylin.common.KylinConfig:
> Use KYLIN_CONF=/home/data/ha/hadoop/data/tmp/nm-local-dir/
> usercache/root/appcache/application_1486537920487_
> 0014/container_1486537920487_0014_01_000001/meta
> 2017-02-08 16:34:19,530 INFO [uber-SubtaskRunner]
> org.apache.kylin.common.KylinConfig:
> Initialized a new KylinConfig from getInstanceFromEnv : 437652820
> 2017-02-08 16:34:19,530 INFO [uber-SubtaskRunner]
> org.apache.kylin.common.KylinConfigBase:
> Kylin Config was updated with kylin.metadata.url :
> /home/data/ha/hadoop/data/tmp/nm-local-dir/usercache/root/
> appcache/application_1486537920487_0014/container_
> 1486537920487_0014_01_000001/meta
> 2017-02-08 16:34:19,550 INFO [uber-SubtaskRunner]
> org.apache.kylin.cube.CubeManager:
> Initializing CubeManager with config /home/data/ha/hadoop/data/tmp/
> nm-local-dir/usercache/root/appcache/application_
> 1486537920487_0014/container_1486537920487_0014_01_000001/meta
> 2017-02-08 16:34:19,554 INFO [uber-SubtaskRunner]
> org.apache.kylin.common.persistence.ResourceStore:
> Using metadata url /home/data/ha/hadoop/data/tmp/
> nm-local-dir/usercache/root/appcache/application_
> 1486537920487_0014/container_1486537920487_0014_01_000001/meta for
> resource
> store
> 2017-02-08 16:34:19,568 INFO [uber-SubtaskRunner]
> org.apache.kylin.cube.CubeManager:
> Loading Cube from folder /home/data/ha/hadoop/data/tmp/
> nm-local-dir/usercache/root/appcache/application_
> 1486537920487_0014/container_1486537920487_0014_01_000001/meta/cube
> 2017-02-08 16:34:20,273 INFO [uber-SubtaskRunner]
> org.apache.kylin.cube.CubeDescManager:
> Initializing CubeDescManager with config /home/data/ha/hadoop/data/tmp/
> nm-local-dir/usercache/root/appcache/application_
> 1486537920487_0014/container_1486537920487_0014_01_000001/meta
> 2017-02-08 16:34:20,273 INFO [uber-SubtaskRunner]
> org.apache.kylin.cube.CubeDescManager:
> Reloading Cube Metadata from folder /home/data/ha/hadoop/data/tmp/
> nm-local-dir/usercache/root/appcache/application_
> 1486537920487_0014/container_1486537920487_0014_01_000001/meta/cube_desc
> 2017-02-08 16:34:20,378 INFO [uber-SubtaskRunner]
> org.apache.kylin.measure.MeasureTypeFactory:
> Checking custom measure types from kylin config
> 2017-02-08 16:34:20,382 INFO [uber-SubtaskRunner]
> org.apache.kylin.measure.MeasureTypeFactory:
> registering COUNT_DISTINCT(hllc), class org.apache.kylin.measure.hllc.
> HLLCMeasureType$Factory
> 2017-02-08 16:34:20,392 INFO [uber-SubtaskRunner]
> org.apache.kylin.measure.MeasureTypeFactory:
> registering COUNT_DISTINCT(bitmap), class org.apache.kylin.measure.
> bitmap.BitmapMeasureType$Factory
> 2017-02-08 16:34:20,401 INFO [uber-SubtaskRunner]
> org.apache.kylin.measure.MeasureTypeFactory:
> registering TOP_N(topn), class org.apache.kylin.measure.topn.
> TopNMeasureType$Factory
> 2017-02-08 16:34:20,405 INFO [uber-SubtaskRunner]
> org.apache.kylin.measure.MeasureTypeFactory:
> registering RAW(raw), class org.apache.kylin.measure.raw.
> RawMeasureType$Factory
> 2017-02-08 16:34:20,407 INFO [uber-SubtaskRunner]
> org.apache.kylin.measure.MeasureTypeFactory:
> registering EXTENDED_COLUMN(extendedcolumn), class
> org.apache.kylin.measure.
> extendedcolumn.ExtendedColumnMeasureType$Factory
> 2017-02-08 16:34:20,465 INFO [uber-SubtaskRunner]
> org.apache.kylin.cube.CubeDescManager:
> Loaded 1 Cube(s)
> 2017-02-08 16:34:20,466 INFO [uber-SubtaskRunner]
> org.apache.kylin.cube.CubeManager:
> Reloaded cube kylin_sales_cube being CUBE[name=kylin_sales_cube] having 1
> segments
> 2017-02-08 16:34:20,466 INFO [uber-SubtaskRunner]
> org.apache.kylin.cube.CubeManager:
> Loaded 1 cubes, fail on 0 cubes
> 2017-02-08 16:34:21,335 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000000_0 is : 0.0
> 2017-02-08 16:34:21,345 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Starting flush of map output
> 2017-02-08 16:34:21,345 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Spilling map output
> 2017-02-08 16:34:21,345 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> bufstart = 0; bufend = 11527344; bufvoid = 104857600
> 2017-02-08 16:34:21,345 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> kvstart = 26214396(104857584); kvend = 25769392(103077568); length =
> 445005/6553600
> 2017-02-08 16:34:21,770 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Finished spill 0
> 2017-02-08 16:34:21,777 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task:attempt_1486537920487_0014_m_000000_0
> is done. And is in the process of committing
> 2017-02-08 16:34:21,866 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000000_0 is : 1.0
> 2017-02-08 16:34:21,871 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Done acknowledgement from attempt_1486537920487_0014_m_000000_0
> 2017-02-08 16:34:21,873 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task 'attempt_1486537920487_0014_m_
> 000000_0'
> done.
> 2017-02-08 16:34:21,878 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000000_0 TaskAttempt Transitioned from
> RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2017-02-08 16:34:21,879 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000000_0
> 2017-02-08 16:34:21,879 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> canceling the task attempt attempt_1486537920487_0014_m_000000_0
> 2017-02-08 16:34:21,881 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> mapreduce.cluster.local.dir for uber task: /data/ha/hadoop/data/tmp/nm-
> local-dir/usercache/root/appcache/application_1486537920487_0014
> 2017-02-08 16:34:21,883 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000000_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2017-02-08 16:34:21,884 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1486537920487_0014_m_000001_0] using containerId:
> [container_1486537920487_0014_01_000001 on NM: [hdslave1:58379]
> 2017-02-08 16:34:21,885 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000001_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
> 2017-02-08 16:34:21,886 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 2017-02-08 16:34:21,902 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1486537920487_0014_m_000000_0
> 2017-02-08 16:34:21,904 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000000 Task Transitioned from RUNNING to
> SUCCEEDED
> 2017-02-08 16:34:21,904 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000001 Task Transitioned from SCHEDULED to
> RUNNING
> 2017-02-08 16:34:21,907 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks:
> 1
> 2017-02-08 16:34:21,937 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Processing split: hdfs://cluster/kylin/kylin_metadata/kylin-983f0299-6dfc-
> 4100-bfc7-c048be622ef3/kylin_sales_cube/cuboid/6d_cuboid/
> part-r-00000:0+4448187
> 2017-02-08 16:34:21,974 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> (EQUATOR) 0 kvi 26214396(104857584)
> 2017-02-08 16:34:21,974 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> mapreduce.task.io.sort.mb: 100
> 2017-02-08 16:34:21,974 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> soft limit at 83886080
> 2017-02-08 16:34:21,974 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> bufstart = 0; bufvoid = 104857600
> 2017-02-08 16:34:21,974 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> kvstart = 26214396; length = 6553600
> 2017-02-08 16:34:21,975 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Map output collector class = org.apache.hadoop.mapred.
> MapTask$MapOutputBuffer
> 2017-02-08 16:34:21,987 INFO [uber-SubtaskRunner]
> org.apache.kylin.engine.mr
> .KylinMapper: The conf for current mapper will be 189881114
> 2017-02-08 16:34:22,812 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000001_0 is : 0.0
> 2017-02-08 16:34:22,813 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Starting flush of map output
> 2017-02-08 16:34:22,813 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Spilling map output
> 2017-02-08 16:34:22,813 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> bufstart = 0; bufend = 10427687; bufvoid = 104857600
> 2017-02-08 16:34:22,813 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> kvstart = 26214396(104857584); kvend = 25820992(103283968); length =
> 393405/6553600
> 2017-02-08 16:34:23,085 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Finished spill 0
> 2017-02-08 16:34:23,088 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task:attempt_1486537920487_0014_m_000001_0
> is done. And is in the process of committing
> 2017-02-08 16:34:23,147 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000001_0 is : 1.0
> 2017-02-08 16:34:23,148 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Done acknowledgement from attempt_1486537920487_0014_m_000001_0
> 2017-02-08 16:34:23,151 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task 'attempt_1486537920487_0014_m_
> 000001_0'
> done.
> 2017-02-08 16:34:23,152 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000001_0 TaskAttempt Transitioned from
> RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2017-02-08 16:34:23,154 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000001_0
> 2017-02-08 16:34:23,154 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> canceling the task attempt attempt_1486537920487_0014_m_000001_0
> 2017-02-08 16:34:23,154 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000001_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2017-02-08 16:34:23,155 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1486537920487_0014_m_000001_0
> 2017-02-08 16:34:23,155 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000001 Task Transitioned from RUNNING to
> SUCCEEDED
> 2017-02-08 16:34:23,155 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks:
> 2
> 2017-02-08 16:34:23,156 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1486537920487_0014_m_000002_0] using containerId:
> [container_1486537920487_0014_01_000001 on NM: [hdslave1:58379]
> 2017-02-08 16:34:23,156 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000002_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
> 2017-02-08 16:34:23,157 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000002 Task Transitioned from SCHEDULED to
> RUNNING
> 2017-02-08 16:34:23,157 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> mapreduce.cluster.local.dir for uber task: /data/ha/hadoop/data/tmp/nm-
> local-dir/usercache/root/appcache/application_1486537920487_0014
> 2017-02-08 16:34:23,160 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 2017-02-08 16:34:23,195 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Processing split: hdfs://cluster/kylin/kylin_metadata/kylin-983f0299-6dfc-
> 4100-bfc7-c048be622ef3/kylin_sales_cube/cuboid/4d_cuboid/
> part-r-00000:0+3989758
> 2017-02-08 16:34:23,235 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> (EQUATOR) 0 kvi 26214396(104857584)
> 2017-02-08 16:34:23,235 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> mapreduce.task.io.sort.mb: 100
> 2017-02-08 16:34:23,235 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> soft limit at 83886080
> 2017-02-08 16:34:23,235 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> bufstart = 0; bufvoid = 104857600
> 2017-02-08 16:34:23,235 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> kvstart = 26214396; length = 6553600
> 2017-02-08 16:34:23,236 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Map output collector class = org.apache.hadoop.mapred.
> MapTask$MapOutputBuffer
> 2017-02-08 16:34:23,244 INFO [uber-SubtaskRunner]
> org.apache.kylin.engine.mr
> .KylinMapper: The conf for current mapper will be 771934936
> 2017-02-08 16:34:23,814 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000002_0 is : 0.0
> 2017-02-08 16:34:23,815 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Starting flush of map output
> 2017-02-08 16:34:23,815 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Spilling map output
> 2017-02-08 16:34:23,815 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> bufstart = 0; bufend = 8693887; bufvoid = 104857600
> 2017-02-08 16:34:23,815 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> kvstart = 26214396(104857584); kvend = 25871696(103486784); length =
> 342701/6553600
> 2017-02-08 16:34:24,035 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Finished spill 0
> 2017-02-08 16:34:24,054 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task:attempt_1486537920487_0014_m_000002_0
> is done. And is in the process of committing
> 2017-02-08 16:34:24,106 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000002_0 is : 1.0
> 2017-02-08 16:34:24,108 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Done acknowledgement from attempt_1486537920487_0014_m_000002_0
> 2017-02-08 16:34:24,110 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task 'attempt_1486537920487_0014_m_
> 000002_0'
> done.
> 2017-02-08 16:34:24,113 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000002_0 TaskAttempt Transitioned from
> RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2017-02-08 16:34:24,114 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000002_0
> 2017-02-08 16:34:24,114 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> canceling the task attempt attempt_1486537920487_0014_m_000002_0
> 2017-02-08 16:34:24,115 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000002_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2017-02-08 16:34:24,115 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> mapreduce.cluster.local.dir for uber task: /data/ha/hadoop/data/tmp/nm-
> local-dir/usercache/root/appcache/application_1486537920487_0014
> 2017-02-08 16:34:24,116 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1486537920487_0014_m_000002_0
> 2017-02-08 16:34:24,116 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000002 Task Transitioned from RUNNING to
> SUCCEEDED
> 2017-02-08 16:34:24,116 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1486537920487_0014_m_000003_0] using containerId:
> [container_1486537920487_0014_01_000001 on NM: [hdslave1:58379]
> 2017-02-08 16:34:24,119 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000003_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
> 2017-02-08 16:34:24,120 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks:
> 3
> 2017-02-08 16:34:24,120 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 2017-02-08 16:34:24,122 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000003 Task Transitioned from SCHEDULED to
> RUNNING
> 2017-02-08 16:34:24,156 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Processing split: hdfs://cluster/kylin/kylin_metadata/kylin-983f0299-6dfc-
> 4100-bfc7-c048be622ef3/kylin_sales_cube/cuboid/7d_cuboid/
> part-r-00000:0+2537582
> 2017-02-08 16:34:24,193 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> (EQUATOR) 0 kvi 26214396(104857584)
> 2017-02-08 16:34:24,193 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> mapreduce.task.io.sort.mb: 100
> 2017-02-08 16:34:24,193 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> soft limit at 83886080
> 2017-02-08 16:34:24,193 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> bufstart = 0; bufvoid = 104857600
> 2017-02-08 16:34:24,193 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> kvstart = 26214396; length = 6553600
> 2017-02-08 16:34:24,194 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Map output collector class = org.apache.hadoop.mapred.
> MapTask$MapOutputBuffer
> 2017-02-08 16:34:24,204 INFO [uber-SubtaskRunner]
> org.apache.kylin.engine.mr
> .KylinMapper: The conf for current mapper will be 1682387687
> 2017-02-08 16:34:24,403 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000003_0 is : 0.0
> 2017-02-08 16:34:24,404 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Starting flush of map output
> 2017-02-08 16:34:24,404 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Spilling map output
> 2017-02-08 16:34:24,404 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> bufstart = 0; bufend = 6165466; bufvoid = 104857600
> 2017-02-08 16:34:24,404 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> kvstart = 26214396(104857584); kvend = 25991496(103965984); length =
> 222901/6553600
> 2017-02-08 16:34:24,552 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Finished spill 0
> 2017-02-08 16:34:24,555 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task:attempt_1486537920487_0014_m_000003_0
> is done. And is in the process of committing
> 2017-02-08 16:34:24,609 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000003_0 is : 1.0
> 2017-02-08 16:34:24,611 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Done acknowledgement from attempt_1486537920487_0014_m_000003_0
> 2017-02-08 16:34:24,611 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task 'attempt_1486537920487_0014_m_
> 000003_0'
> done.
> 2017-02-08 16:34:24,616 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000003_0 TaskAttempt Transitioned from
> RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2017-02-08 16:34:24,617 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000003_0
> 2017-02-08 16:34:24,617 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> canceling the task attempt attempt_1486537920487_0014_m_000003_0
> 2017-02-08 16:34:24,617 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000003_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2017-02-08 16:34:24,618 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1486537920487_0014_m_000003_0
> 2017-02-08 16:34:24,619 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000003 Task Transitioned from RUNNING to
> SUCCEEDED
> 2017-02-08 16:34:24,619 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks:
> 4
> 2017-02-08 16:34:24,619 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> mapreduce.cluster.local.dir for uber task: /data/ha/hadoop/data/tmp/nm-
> local-dir/usercache/root/appcache/application_1486537920487_0014
> 2017-02-08 16:34:24,624 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1486537920487_0014_m_000004_0] using containerId:
> [container_1486537920487_0014_01_000001 on NM: [hdslave1:58379]
> 2017-02-08 16:34:24,624 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000004_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
> 2017-02-08 16:34:24,624 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 2017-02-08 16:34:24,625 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000004 Task Transitioned from SCHEDULED to
> RUNNING
> 2017-02-08 16:34:24,656 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Processing split: hdfs://cluster/kylin/kylin_metadata/kylin-983f0299-6dfc-
> 4100-bfc7-c048be622ef3/kylin_sales_cube/cuboid/3d_cuboid/
> part-r-00000:0+1670211
> 2017-02-08 16:34:24,700 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> (EQUATOR) 0 kvi 26214396(104857584)
> 2017-02-08 16:34:24,700 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> mapreduce.task.io.sort.mb: 100
> 2017-02-08 16:34:24,700 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> soft limit at 83886080
> 2017-02-08 16:34:24,700 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> bufstart = 0; bufvoid = 104857600
> 2017-02-08 16:34:24,700 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> kvstart = 26214396; length = 6553600
> 2017-02-08 16:34:24,702 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Map output collector class = org.apache.hadoop.mapred.
> MapTask$MapOutputBuffer
> 2017-02-08 16:34:24,714 INFO [uber-SubtaskRunner]
> org.apache.kylin.engine.mr
> .KylinMapper: The conf for current mapper will be 1857508179
> 2017-02-08 16:34:24,899 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000004_0 is : 0.0
> 2017-02-08 16:34:24,900 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Starting flush of map output
> 2017-02-08 16:34:24,900 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Spilling map output
> 2017-02-08 16:34:24,900 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> bufstart = 0; bufend = 3249222; bufvoid = 104857600
> 2017-02-08 16:34:24,900 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> kvstart = 26214396(104857584); kvend = 26087080(104348320); length =
> 127317/6553600
> 2017-02-08 16:34:24,995 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Finished spill 0
> 2017-02-08 16:34:24,997 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task:attempt_1486537920487_0014_m_000004_0
> is done. And is in the process of committing
> 2017-02-08 16:34:25,050 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000004_0 is : 1.0
> 2017-02-08 16:34:25,051 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Done acknowledgement from attempt_1486537920487_0014_m_000004_0
> 2017-02-08 16:34:25,051 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task 'attempt_1486537920487_0014_m_
> 000004_0'
> done.
> 2017-02-08 16:34:25,057 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000004_0 TaskAttempt Transitioned from
> RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2017-02-08 16:34:25,057 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000004_0
> 2017-02-08 16:34:25,058 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> canceling the task attempt attempt_1486537920487_0014_m_000004_0
> 2017-02-08 16:34:25,058 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000004_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2017-02-08 16:34:25,058 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> mapreduce.cluster.local.dir for uber task: /data/ha/hadoop/data/tmp/nm-
> local-dir/usercache/root/appcache/application_1486537920487_0014
> 2017-02-08 16:34:25,059 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1486537920487_0014_m_000005_0] using containerId:
> [container_1486537920487_0014_01_000001 on NM: [hdslave1:58379]
> 2017-02-08 16:34:25,060 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000005_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
> 2017-02-08 16:34:25,062 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1486537920487_0014_m_000004_0
> 2017-02-08 16:34:25,062 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 2017-02-08 16:34:25,062 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000004 Task Transitioned from RUNNING to
> SUCCEEDED
> 2017-02-08 16:34:25,064 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000005 Task Transitioned from SCHEDULED to
> RUNNING
> 2017-02-08 16:34:25,064 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks:
> 5
> 2017-02-08 16:34:25,097 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Processing split: hdfs://cluster/kylin/kylin_metadata/kylin-983f0299-6dfc-
> 4100-bfc7-c048be622ef3/kylin_sales_cube/cuboid/base_cuboid/
> part-r-00000:0+659625
> 2017-02-08 16:34:25,153 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> (EQUATOR) 0 kvi 26214396(104857584)
> 2017-02-08 16:34:25,153 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> mapreduce.task.io.sort.mb: 100
> 2017-02-08 16:34:25,153 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> soft limit at 83886080
> 2017-02-08 16:34:25,153 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> bufstart = 0; bufvoid = 104857600
> 2017-02-08 16:34:25,153 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> kvstart = 26214396; length = 6553600
> 2017-02-08 16:34:25,154 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Map output collector class = org.apache.hadoop.mapred.
> MapTask$MapOutputBuffer
> 2017-02-08 16:34:25,162 INFO [uber-SubtaskRunner]
> org.apache.kylin.engine.mr
> .KylinMapper: The conf for current mapper will be 2003502393
> 2017-02-08 16:34:25,204 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000005_0 is : 0.0
> 2017-02-08 16:34:25,204 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Starting flush of map output
> 2017-02-08 16:34:25,204 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Spilling map output
> 2017-02-08 16:34:25,205 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> bufstart = 0; bufend = 1683307; bufvoid = 104857600
> 2017-02-08 16:34:25,205 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> kvstart = 26214396(104857584); kvend = 26158720(104634880); length =
> 55677/6553600
> 2017-02-08 16:34:25,248 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Finished spill 0
> 2017-02-08 16:34:25,251 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task:attempt_1486537920487_0014_m_000005_0
> is done. And is in the process of committing
> 2017-02-08 16:34:25,309 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000005_0 is : 1.0
> 2017-02-08 16:34:25,311 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Done acknowledgement from attempt_1486537920487_0014_m_000005_0
> 2017-02-08 16:34:25,313 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task 'attempt_1486537920487_0014_m_
> 000005_0'
> done.
> 2017-02-08 16:34:25,315 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000005_0 TaskAttempt Transitioned from
> RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2017-02-08 16:34:25,315 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000005_0
> 2017-02-08 16:34:25,316 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> canceling the task attempt attempt_1486537920487_0014_m_000005_0
> 2017-02-08 16:34:25,316 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000005_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2017-02-08 16:34:25,317 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1486537920487_0014_m_000005_0
> 2017-02-08 16:34:25,317 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> mapreduce.cluster.local.dir for uber task: /data/ha/hadoop/data/tmp/nm-
> local-dir/usercache/root/appcache/application_1486537920487_0014
> 2017-02-08 16:34:25,317 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000005 Task Transitioned from RUNNING to
> SUCCEEDED
> 2017-02-08 16:34:25,319 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1486537920487_0014_m_000006_0] using containerId:
> [container_1486537920487_0014_01_000001 on NM: [hdslave1:58379]
> 2017-02-08 16:34:25,322 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000006_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
> 2017-02-08 16:34:25,323 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 2017-02-08 16:34:25,323 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks:
> 6
> 2017-02-08 16:34:25,324 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000006 Task Transitioned from SCHEDULED to
> RUNNING
> 2017-02-08 16:34:25,353 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Processing split: hdfs://cluster/kylin/kylin_metadata/kylin-983f0299-6dfc-
> 4100-bfc7-c048be622ef3/kylin_sales_cube/cuboid/8d_cuboid/
> part-r-00000:0+639806
> 2017-02-08 16:34:25,388 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> (EQUATOR) 0 kvi 26214396(104857584)
> 2017-02-08 16:34:25,389 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> mapreduce.task.io.sort.mb: 100
> 2017-02-08 16:34:25,389 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> soft limit at 83886080
> 2017-02-08 16:34:25,389 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> bufstart = 0; bufvoid = 104857600
> 2017-02-08 16:34:25,389 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> kvstart = 26214396; length = 6553600
> 2017-02-08 16:34:25,391 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 2017-02-08 16:34:25,402 INFO [uber-SubtaskRunner]
> org.apache.kylin.engine.mr.KylinMapper: The conf for current mapper will be 967401861
> 2017-02-08 16:34:25,459 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000006_0 is : 0.0
> 2017-02-08 16:34:25,459 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Starting flush of map output
> 2017-02-08 16:34:25,459 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Spilling map output
> 2017-02-08 16:34:25,461 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> bufstart = 0; bufend = 1611280; bufvoid = 104857600
> 2017-02-08 16:34:25,461 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> kvstart = 26214396(104857584); kvend = 26159312(104637248); length =
> 55085/6553600
> 2017-02-08 16:34:25,506 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Finished spill 0
> 2017-02-08 16:34:25,509 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task:attempt_1486537920487_0014_m_000006_0
> is done. And is in the process of committing
> 2017-02-08 16:34:25,581 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000006_0 is : 1.0
> 2017-02-08 16:34:25,582 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Done acknowledgement from attempt_1486537920487_0014_m_000006_0
> 2017-02-08 16:34:25,582 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task 'attempt_1486537920487_0014_m_000006_0' done.
> 2017-02-08 16:34:25,587 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000006_0 TaskAttempt Transitioned from
> RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2017-02-08 16:34:25,587 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000006_0
> 2017-02-08 16:34:25,587 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> canceling the task attempt attempt_1486537920487_0014_m_000006_0
> 2017-02-08 16:34:25,588 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000006_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2017-02-08 16:34:25,589 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1486537920487_0014_m_000006_0
> 2017-02-08 16:34:25,589 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000006 Task Transitioned from RUNNING to
> SUCCEEDED
> 2017-02-08 16:34:25,589 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks:
> 7
> 2017-02-08 16:34:25,589 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> mapreduce.cluster.local.dir for uber task: /data/ha/hadoop/data/tmp/nm-local-dir/usercache/root/appcache/application_1486537920487_0014
> 2017-02-08 16:34:25,596 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1486537920487_0014_m_000007_0] using containerId:
> [container_1486537920487_0014_01_000001 on NM: [hdslave1:58379]
> 2017-02-08 16:34:25,596 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000007_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
> 2017-02-08 16:34:25,596 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000007 Task Transitioned from SCHEDULED to
> RUNNING
> 2017-02-08 16:34:25,604 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 2017-02-08 16:34:25,648 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Processing split: hdfs://cluster/kylin/kylin_metadata/kylin-983f0299-6dfc-4100-bfc7-c048be622ef3/kylin_sales_cube/cuboid/2d_cuboid/part-r-00000:0+217458
> 2017-02-08 16:34:25,701 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> (EQUATOR) 0 kvi 26214396(104857584)
> 2017-02-08 16:34:25,701 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> mapreduce.task.io.sort.mb: 100
> 2017-02-08 16:34:25,701 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> soft limit at 83886080
> 2017-02-08 16:34:25,701 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> bufstart = 0; bufvoid = 104857600
> 2017-02-08 16:34:25,701 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> kvstart = 26214396; length = 6553600
> 2017-02-08 16:34:25,703 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 2017-02-08 16:34:25,711 INFO [uber-SubtaskRunner]
> org.apache.kylin.engine.mr.KylinMapper: The conf for current mapper will be 45404665
> 2017-02-08 16:34:25,736 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000007_0 is : 0.0
> 2017-02-08 16:34:25,736 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Starting flush of map output
> 2017-02-08 16:34:25,736 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Spilling map output
> 2017-02-08 16:34:25,736 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> bufstart = 0; bufend = 277718; bufvoid = 104857600
> 2017-02-08 16:34:25,737 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> kvstart = 26214396(104857584); kvend = 26208792(104835168); length =
> 5605/6553600
> 2017-02-08 16:34:25,746 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Finished spill 0
> 2017-02-08 16:34:25,748 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task:attempt_1486537920487_0014_m_000007_0
> is done. And is in the process of committing
> 2017-02-08 16:34:25,808 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000007_0 is : 1.0
> 2017-02-08 16:34:25,809 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Done acknowledgement from attempt_1486537920487_0014_m_000007_0
> 2017-02-08 16:34:25,810 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task 'attempt_1486537920487_0014_m_000007_0' done.
> 2017-02-08 16:34:25,815 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000007_0 TaskAttempt Transitioned from
> RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2017-02-08 16:34:25,815 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000007_0
> 2017-02-08 16:34:25,815 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> canceling the task attempt attempt_1486537920487_0014_m_000007_0
> 2017-02-08 16:34:25,815 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000007_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2017-02-08 16:34:25,816 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1486537920487_0014_m_000007_0
> 2017-02-08 16:34:25,816 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000007 Task Transitioned from RUNNING to
> SUCCEEDED
> 2017-02-08 16:34:25,816 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks:
> 8
> 2017-02-08 16:34:25,819 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1486537920487_0014_m_000008_0] using containerId:
> [container_1486537920487_0014_01_000001 on NM: [hdslave1:58379]
> 2017-02-08 16:34:25,819 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000008_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
> 2017-02-08 16:34:25,820 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> mapreduce.cluster.local.dir for uber task: /data/ha/hadoop/data/tmp/nm-local-dir/usercache/root/appcache/application_1486537920487_0014
> 2017-02-08 16:34:25,820 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000008 Task Transitioned from SCHEDULED to
> RUNNING
> 2017-02-08 16:34:25,825 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 2017-02-08 16:34:25,854 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Processing split: hdfs://cluster/kylin/kylin_metadata/kylin-983f0299-6dfc-4100-bfc7-c048be622ef3/kylin_sales_cube/cuboid/1d_cuboid/part-r-00000:0+78
> 2017-02-08 16:34:25,890 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> (EQUATOR) 0 kvi 26214396(104857584)
> 2017-02-08 16:34:25,890 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> mapreduce.task.io.sort.mb: 100
> 2017-02-08 16:34:25,890 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> soft limit at 83886080
> 2017-02-08 16:34:25,890 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> bufstart = 0; bufvoid = 104857600
> 2017-02-08 16:34:25,890 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> kvstart = 26214396; length = 6553600
> 2017-02-08 16:34:25,891 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> 2017-02-08 16:34:25,899 INFO [uber-SubtaskRunner]
> org.apache.kylin.engine.mr.KylinMapper: The conf for current mapper will be 471051897
> 2017-02-08 16:34:25,903 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000008_0 is : 0.0
> 2017-02-08 16:34:25,904 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.MapTask:
> Starting flush of map output
> 2017-02-08 16:34:25,911 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task:attempt_1486537920487_0014_m_000008_0
> is done. And is in the process of committing
> 2017-02-08 16:34:25,963 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_m_000008_0 is : 1.0
> 2017-02-08 16:34:25,964 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Done acknowledgement from attempt_1486537920487_0014_m_000008_0
> 2017-02-08 16:34:25,964 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task: Task 'attempt_1486537920487_0014_m_000008_0' done.
> 2017-02-08 16:34:25,969 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000008_0 TaskAttempt Transitioned from
> RUNNING
> to SUCCESS_CONTAINER_CLEANUP
> 2017-02-08 16:34:25,970 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_m_000008_0
> 2017-02-08 16:34:25,970 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> canceling the task attempt attempt_1486537920487_0014_m_000008_0
> 2017-02-08 16:34:25,970 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_m_000008_0 TaskAttempt Transitioned from
> SUCCESS_CONTAINER_CLEANUP to SUCCEEDED
> 2017-02-08 16:34:25,970 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with
> attempt attempt_1486537920487_0014_m_000008_0
> 2017-02-08 16:34:25,970 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_m_000008 Task Transitioned from RUNNING to
> SUCCEEDED
> 2017-02-08 16:34:25,971 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks:
> 9
> 2017-02-08 16:34:25,973 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1486537920487_0014_r_000000_0] using containerId:
> [container_1486537920487_0014_01_000001 on NM: [hdslave1:58379]
> 2017-02-08 16:34:25,974 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_r_000000_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
> 2017-02-08 16:34:25,974 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_r_000000 Task Transitioned from SCHEDULED to
> RUNNING
> 2017-02-08 16:34:25,974 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> mapreduce.cluster.local.dir for uber task: /data/ha/hadoop/data/tmp/nm-local-dir/usercache/root/appcache/application_1486537920487_0014
> 2017-02-08 16:34:25,979 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
> 2017-02-08 16:34:26,011 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.ReduceTask:
> Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@72a4822e
> 2017-02-08 16:34:26,024 INFO [uber-SubtaskRunner]
> org.apache.hadoop.conf.Configuration.deprecation:
> session.id is deprecated. Instead, use dfs.metrics.session-id
> 2017-02-08 16:34:26,043 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: MergerManager:
> memoryLimit=668309888, maxSingleShuffleLimit=167077472,
> mergeThreshold=441084544, ioSortFactor=10, memToMemMergeOutputsThreshold=10
> 2017-02-08 16:34:26,049 INFO [EventFetcher for fetching Map Completion
> Events] org.apache.hadoop.mapreduce.task.reduce.EventFetcher:
> attempt_1486537920487_0014_r_000000_0 Thread started: EventFetcher for
> fetching Map Completion Events
> 2017-02-08 16:34:26,049 INFO [EventFetcher for fetching Map Completion
> Events] org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> MapCompletionEvents request from attempt_1486537920487_0014_r_000000_0.
> startIndex 0 maxEvents 10000
> 2017-02-08 16:34:26,057 INFO [EventFetcher for fetching Map Completion
> Events] org.apache.hadoop.mapreduce.task.reduce.EventFetcher:
> attempt_1486537920487_0014_r_000000_0: Got 9 new map-outputs
> 2017-02-08 16:34:26,121 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.LocalFetcher:
> localfetcher#1 about to shuffle output of map
> attempt_1486537920487_0014_m_000000_0
> decomp: 11752414 len: 11752418 to MEMORY
> 2017-02-08 16:34:26,182 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput:
> Read 11752414 bytes from map-output for attempt_1486537920487_0014_m_000000_0
> 2017-02-08 16:34:26,182 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl:
> closeInMemoryFile -> map-output of size: 11752414,
> inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->11752414
> 2017-02-08 16:34:26,188 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.LocalFetcher:
> localfetcher#1 about to shuffle output of map
> attempt_1486537920487_0014_m_000003_0
> decomp: 6277344 len: 6277348 to MEMORY
> 2017-02-08 16:34:26,208 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput:
> Read 6277344 bytes from map-output for attempt_1486537920487_0014_m_000003_0
> 2017-02-08 16:34:26,208 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl:
> closeInMemoryFile -> map-output of size: 6277344, inMemoryMapOutputs.size()
> -> 2, commitMemory -> 11752414, usedMemory ->18029758
> 2017-02-08 16:34:26,210 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.LocalFetcher:
> localfetcher#1 about to shuffle output of map
> attempt_1486537920487_0014_m_000006_0
> decomp: 1638826 len: 1638830 to MEMORY
> 2017-02-08 16:34:26,216 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput:
> Read 1638826 bytes from map-output for attempt_1486537920487_0014_m_000006_0
> 2017-02-08 16:34:26,217 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl:
> closeInMemoryFile -> map-output of size: 1638826, inMemoryMapOutputs.size()
> -> 3, commitMemory -> 18029758, usedMemory ->19668584
> 2017-02-08 16:34:26,219 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.LocalFetcher:
> localfetcher#1 about to shuffle output of map
> attempt_1486537920487_0014_m_000004_0
> decomp: 3314905 len: 3314909 to MEMORY
> 2017-02-08 16:34:26,232 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput:
> Read 3314905 bytes from map-output for attempt_1486537920487_0014_m_000004_0
> 2017-02-08 16:34:26,232 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl:
> closeInMemoryFile -> map-output of size: 3314905, inMemoryMapOutputs.size()
> -> 4, commitMemory -> 19668584, usedMemory ->22983489
> 2017-02-08 16:34:26,234 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.LocalFetcher:
> localfetcher#1 about to shuffle output of map
> attempt_1486537920487_0014_m_000007_0
> decomp: 281326 len: 281330 to MEMORY
> 2017-02-08 16:34:26,235 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput:
> Read 281326 bytes from map-output for attempt_1486537920487_0014_m_000007_0
> 2017-02-08 16:34:26,235 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl:
> closeInMemoryFile -> map-output of size: 281326, inMemoryMapOutputs.size()
> -> 5, commitMemory -> 22983489, usedMemory ->23264815
> 2017-02-08 16:34:26,236 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.LocalFetcher:
> localfetcher#1 about to shuffle output of map
> attempt_1486537920487_0014_m_000008_0
> decomp: 2 len: 6 to MEMORY
> 2017-02-08 16:34:26,236 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput:
> Read 2 bytes from map-output for attempt_1486537920487_0014_m_000008_0
> 2017-02-08 16:34:26,236 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl:
> closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 6,
> commitMemory -> 23264815, usedMemory ->23264817
> 2017-02-08 16:34:26,246 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.LocalFetcher:
> localfetcher#1 about to shuffle output of map
> attempt_1486537920487_0014_m_000001_0
> decomp: 10625932 len: 10625936 to MEMORY
> 2017-02-08 16:34:26,281 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput:
> Read 10625932 bytes from map-output for attempt_1486537920487_0014_m_000001_0
> 2017-02-08 16:34:26,281 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl:
> closeInMemoryFile -> map-output of size: 10625932,
> inMemoryMapOutputs.size() -> 7, commitMemory -> 23264817, usedMemory
> ->33890749
> 2017-02-08 16:34:26,295 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.LocalFetcher:
> localfetcher#1 about to shuffle output of map
> attempt_1486537920487_0014_m_000002_0
> decomp: 8867887 len: 8867891 to MEMORY
> 2017-02-08 16:34:26,328 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput:
> Read 8867887 bytes from map-output for attempt_1486537920487_0014_m_000002_0
> 2017-02-08 16:34:26,328 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl:
> closeInMemoryFile -> map-output of size: 8867887, inMemoryMapOutputs.size()
> -> 8, commitMemory -> 33890749, usedMemory ->42758636
> 2017-02-08 16:34:26,330 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.LocalFetcher:
> localfetcher#1 about to shuffle output of map
> attempt_1486537920487_0014_m_000005_0
> decomp: 1711149 len: 1711153 to MEMORY
> 2017-02-08 16:34:26,336 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput:
> Read 1711149 bytes from map-output for attempt_1486537920487_0014_m_000005_0
> 2017-02-08 16:34:26,336 INFO [localfetcher#1]
> org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl:
> closeInMemoryFile -> map-output of size: 1711149, inMemoryMapOutputs.size()
> -> 9, commitMemory -> 42758636, usedMemory ->44469785
> 2017-02-08 16:34:26,337 INFO [EventFetcher for fetching Map Completion
> Events] org.apache.hadoop.mapreduce.task.reduce.EventFetcher: EventFetcher
> is interrupted.. Returning
> 2017-02-08 16:34:26,338 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_r_000000_0 is : 0.0
> 2017-02-08 16:34:26,341 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: finalMerge
> called
> with 9 in-memory map-outputs and 0 on-disk map-outputs
> 2017-02-08 16:34:26,352 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Merger:
> Merging 9 sorted segments
> 2017-02-08 16:34:26,353 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Merger:
> Down to the last merge-pass, with 8 segments left of total size: 44469531
> bytes
> 2017-02-08 16:34:27,358 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: Merged 9
> segments, 44469785 bytes to disk to satisfy reduce memory limit
> 2017-02-08 16:34:27,359 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: Merging 1 files,
> 44469773 bytes from disk
> 2017-02-08 16:34:27,360 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl: Merging 0
> segments, 0 bytes from memory into reduce
> 2017-02-08 16:34:27,360 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Merger:
> Merging 1 sorted segments
> 2017-02-08 16:34:27,361 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Merger:
> Down to the last merge-pass, with 1 segments left of total size: 44469750
> bytes
> 2017-02-08 16:34:27,361 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Progress of TaskAttempt attempt_1486537920487_0014_r_000000_0 is : 0.0
> 2017-02-08 16:34:27,532 INFO [uber-SubtaskRunner]
> org.apache.hadoop.conf.Configuration.deprecation:
> mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
> 2017-02-08 16:34:27,660 INFO [uber-SubtaskRunner]
> org.apache.hadoop.hbase.io.hfile.CacheConfig: CacheConfig:disabled
> 2017-02-08 16:34:27,748 INFO [uber-SubtaskRunner]
> org.apache.hadoop.hbase.io.hfile.CacheConfig: CacheConfig:disabled
> 2017-02-08 16:34:28,318 ERROR [uber-SubtaskRunner]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Error running local (uberized) 'child' : java.lang.NoSuchMethodError:
> org.apache.hadoop.hbase.util.ChecksumType.getChecksumObject()Ljava/util/zip/Checksum;
>     at org.apache.hadoop.hbase.io.hfile.ChecksumUtil.generateChecksums(ChecksumUtil.java:73)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.finishBlock(HFileBlock.java:943)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.ensureBlockReady(HFileBlock.java:895)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.finishBlockAndWriteHeaderAndData(HFileBlock.java:1011)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.writeHeaderAndData(HFileBlock.java:997)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexWriter.writeIndexBlocks(HFileBlockIndex.java:883)
>     at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.close(HFileWriterV2.java:331)
>     at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.close(StoreFile.java:996)
>     at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2$1.close(HFileOutputFormat2.java:269)
>     at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2$1.close(HFileOutputFormat2.java:277)
>     at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:550)
>     at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:629)
>     at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
>     at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runSubtask(LocalContainerLauncher.java:404)
>     at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runTask(LocalContainerLauncher.java:295)
>     at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.access$200(LocalContainerLauncher.java:181)
>     at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler$1.run(LocalContainerLauncher.java:224)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
>
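[Editor's note] A `NoSuchMethodError` like the one above means the calling class (`ChecksumUtil`, from the HBase jar bundled with Kylin) was compiled against one HBase version, while a different version of `ChecksumType` was loaded at runtime, i.e. two HBase versions are mixed on the task classpath. A generic JVM technique for spotting this is to ask which jar a class was actually loaded from; the sketch below is a minimal, hypothetical helper (`FindJar` is not part of Hadoop, HBase, or Kylin):

```java
// Prints the code source (usually the jar file) that a class was loaded
// from, which helps locate classpath version mismatches. Classes loaded
// by the JVM bootstrap loader have no code source.
public class FindJar {

    // Returns the jar/location that provided the named class, or
    // "bootstrap classpath" for JVM core classes, or "not on classpath"
    // when the class cannot be resolved at all.
    static String locate(String className) {
        try {
            Class<?> c = Class.forName(className);
            java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
            return src == null ? "bootstrap classpath" : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "not on classpath";
        }
    }

    public static void main(String[] args) {
        // e.g. java -cp "$(hbase classpath)" FindJar \
        //          org.apache.hadoop.hbase.util.ChecksumType
        for (String name : args) {
            System.out.println(name + " -> " + locate(name));
        }
    }
}
```

Running this with the same classpath the failing task uses would show whether `ChecksumType` resolves to the expected hbase-common jar; if it points at a stray older jar (for instance one copied into the Hadoop lib directory), that jar is the likely culprit.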
> 2017-02-08 16:34:28,318 ERROR [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Task: attempt_1486537920487_0014_r_000000_0 - exited :
> org.apache.hadoop.hbase.util.ChecksumType.getChecksumObject()Ljava/util/zip/Checksum;
> 2017-02-08 16:34:28,319 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Diagnostics report from attempt_1486537920487_0014_r_000000_0: Error:
> org.apache.hadoop.hbase.util.ChecksumType.getChecksumObject()Ljava/util/zip/Checksum;
> 2017-02-08 16:34:28,320 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> removed attempt attempt_1486537920487_0014_r_000000_0 from the futures to
> keep track of
> 2017-02-08 16:34:28,324 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics
> report from attempt_1486537920487_0014_r_000000_0: Error:
> org.apache.hadoop.hbase.util.ChecksumType.getChecksumObject()Ljava/util/zip/Checksum;
> 2017-02-08 16:34:28,327 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_r_000000_0 TaskAttempt Transitioned from
> RUNNING
> to FAIL_CONTAINER_CLEANUP
> 2017-02-08 16:34:28,327 INFO [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container
> container_1486537920487_0014_01_000001 taskAttempt
> attempt_1486537920487_0014_r_000000_0
> 2017-02-08 16:34:28,330 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_r_000000_0 TaskAttempt Transitioned from
> FAIL_CONTAINER_CLEANUP to FAIL_TASK_CLEANUP
> 2017-02-08 16:34:28,331 INFO [CommitterEvent Processor #1]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> the event EventType: TASK_ABORT
> 2017-02-08 16:34:28,358 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1486537920487_0014_r_000000_0 TaskAttempt Transitioned from
> FAIL_TASK_CLEANUP to FAILED
> 2017-02-08 16:34:28,361 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1486537920487_0014_r_000000 Task Transitioned from RUNNING to FAILED
> 2017-02-08 16:34:28,361 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks:
> 10
> 2017-02-08 16:34:28,362 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks
> failed. failedMaps:0 failedReduces:1
> 2017-02-08 16:34:28,365 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1486537920487_0014Job Transitioned from RUNNING to FAIL_ABORT
> 2017-02-08 16:34:28,371 INFO [CommitterEvent Processor #2]
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
> Processing
> the event EventType: JOB_ABORT
> 2017-02-08 16:34:28,396 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
> job_1486537920487_0014Job Transitioned from FAIL_ABORT to FAILED
> 2017-02-08 16:34:28,398 INFO [Thread-79]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> We are finishing cleanly so this is the last retry
> 2017-02-08 16:34:28,398 INFO [Thread-79]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> Notify RMCommunicator isAMLastRetry: true
> 2017-02-08 16:34:28,398 INFO [Thread-79] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator notified that shouldUnregistered is: true
> 2017-02-08 16:34:28,398 INFO [Thread-79]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> Notify JHEH isAMLastRetry: true
> 2017-02-08 16:34:28,398 INFO [Thread-79] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: JobHistoryEventHandler notified that forceJobCompletion is true
> 2017-02-08 16:34:28,398 INFO [Thread-79]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> Calling stop for all the services
> 2017-02-08 16:34:28,400 INFO [Thread-79] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping JobHistoryEventHandler. Size of the outstanding queue size is 0
> 2017-02-08 16:34:28,500 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> hdfs://cluster:8020/tmp/hadoop-yarn/staging/root/.staging/job_1486537920487_0014/job_1486537920487_0014_1.jhist to
> hdfs://cluster:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1486537920487_0014-1486542851497-root-Kylin_HFile_Generator_kylin_sales_cube_Step-1486542868361-9-0-FAILED-default-1486542858638.jhist_tmp
> 2017-02-08 16:34:28,563 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location:
> hdfs://cluster:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1486537920487_0014-1486542851497-root-Kylin_HFile_Generator_kylin_sales_cube_Step-1486542868361-9-0-FAILED-default-1486542858638.jhist_tmp
> 2017-02-08 16:34:28,576 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying
> hdfs://cluster:8020/tmp/hadoop-yarn/staging/root/.staging/job_1486537920487_0014/job_1486537920487_0014_1_conf.xml to
> hdfs://cluster:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1486537920487_0014_conf.xml_tmp
> 2017-02-08 16:34:28,636 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location:
> hdfs://cluster:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1486537920487_0014_conf.xml_tmp
> 2017-02-08 16:34:28,665 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done:
> hdfs://cluster:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1486537920487_0014.summary_tmp to
> hdfs://cluster:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1486537920487_0014.summary
> 2017-02-08 16:34:28,675 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done:
> hdfs://cluster:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1486537920487_0014_conf.xml_tmp to
> hdfs://cluster:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1486537920487_0014_conf.xml
> 2017-02-08 16:34:28,686 INFO [eventHandlingThread]
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done:
> hdfs://cluster:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1486537920487_0014-1486542851497-root-Kylin_HFile_Generator_kylin_sales_cube_Step-1486542868361-9-0-FAILED-default-1486542858638.jhist_tmp to
> hdfs://cluster:8020/tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1486537920487_0014-1486542851497-root-Kylin_HFile_Generator_kylin_sales_cube_Step-1486542868361-9-0-FAILED-default-1486542858638.jhist
> 2017-02-08 16:34:28,688 INFO [Thread-79] org.apache.hadoop.mapreduce.
> jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler.
> super.stop()
> 2017-02-08 16:34:28,691 ERROR [uber-EventHandler]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> Returning, interrupted : java.lang.InterruptedException
> 2017-02-08 16:34:28,693 INFO [Thread-79] org.apache.hadoop.mapreduce.
> v2.app.rm.RMContainerAllocator: Setting job diagnostics to Task failed
> task_1486537920487_0014_r_000000
> Job failed as tasks failed. failedMaps:0 failedReduces:1
>
> 2017-02-08 16:34:28,694 INFO [Thread-79] org.apache.hadoop.mapreduce.
> v2.app.rm.RMContainerAllocator: History url is http://hdmaster:19888/
> jobhistory/job/job_1486537920487_0014
> 2017-02-08 16:34:28,705 INFO [Thread-79] org.apache.hadoop.mapreduce.
> v2.app.rm.RMContainerAllocator: Waiting for application to be successfully
> unregistered.
> 2017-02-08 16:34:29,710 INFO [Thread-79]
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
> Deleting staging directory hdfs://cluster /tmp/hadoop-yarn/staging/root/
> .staging/job_1486537920487_0014
> 2017-02-08 16:34:29,726 INFO [Thread-79] org.apache.hadoop.ipc.Server:
> Stopping server on 52930
> 2017-02-08 16:34:29,734 INFO [IPC Server listener on 52930]
> org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 52930
> 2017-02-08 16:34:29,735 INFO [TaskHeartbeatHandler PingChecker]
> org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
> TaskHeartbeatHandler thread interrupted
> 2017-02-08 16:34:29,738 INFO [IPC Server Responder]
> org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
>
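The quoted log only shows the AppMaster's shutdown sequence; the actual failure is the NoSuchMethodError on org.apache.hadoop.hbase.util.ChecksumType quoted earlier in the thread, which usually means two incompatible HBase versions ended up on the job's classpath. As a rough illustration (the classpath entries below are hypothetical, not taken from this cluster), duplicate artifact versions can be spotted by parsing jar file names:

```python
import re
from collections import defaultdict

def find_version_conflicts(classpath_entries):
    """Group jars like 'hbase-common-1.2.4.jar' by artifact name and
    report every artifact that appears with more than one version."""
    versions = defaultdict(set)
    for entry in classpath_entries:
        jar = entry.rsplit("/", 1)[-1]            # strip the directory part
        m = re.match(r"(.+?)-(\d.*)\.jar$", jar)  # split at the first '-<digit>'
        if m:
            versions[m.group(1)].add(m.group(2))
    return {a: sorted(v) for a, v in versions.items() if len(v) > 1}

# Hypothetical layout: a leftover 0.98 jar next to the 1.2.4 install is
# exactly the kind of mix that produces NoSuchMethodError at runtime.
cp = [
    "/opt/hbase/lib/hbase-common-1.2.4.jar",
    "/opt/hbase/lib/hbase-server-1.2.4.jar",
    "/opt/hadoop/share/hadoop/common/lib/hbase-common-0.98.8-hadoop2.jar",
]
print(find_version_conflicts(cp))
# → {'hbase-common': ['0.98.8-hadoop2', '1.2.4']}
```

Feeding the script the output of `hbase classpath` or `hadoop classpath` (split on ':') would show whether the running jobs see more than one HBase version.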
