From: Li Yang
Date: Thu, 29 Jun 2017 14:32:27 +0800
Subject: Re: File not found error at step 2 in yarn logs
To: user@kylin.apache.org
Cc: dev@kylin.apache.org

Kylin ships its metadata to the MR job through the distributed cache. The missing file
"file:/home/q/hadoop/kylin/tomcat/temp/kylin_job_meta3892468167792432608/meta"
should be present on machines B and D before YARN kicks off the mappers.

As to why the files were not there... I don't know.

On Wed, Jun 14, 2017 at 12:12 PM, Gavin_Chou wrote:

> Hi, all:
> I have a problem while building a cube at step 2.
>
> The error appears in the yarn log:
>
> 2017-06-14 11:21:08,793 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1497364689294_0018 transitioned from NEW to INITING
> 2017-06-14 11:21:08,793 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1497364689294_0018_01_000001 to application application_1497364689294_0018
> 2017-06-14 11:21:08,793 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1497364689294_0018 transitioned from INITING to RUNNING
> 2017-06-14 11:21:08,794 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1497364689294_0018_01_000001 transitioned from NEW to LOCALIZING
> 2017-06-14 11:21:08,794 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1497364689294_0018
> 2017-06-14 11:21:08,794 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource file:/tmp/hadoop-yarn/staging/hadoop/.staging/job_1497364689294_0018/job.jar transitioned from INIT to DOWNLOADING
> 2017-06-14 11:21:08,794 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.localizer.LocalizedResource: Resource file:/tmp/hadoop-yarn/staging/hadoop/.staging/job_1497364689294_0018/job.splitmetainfo transitioned from INIT to DOWNLOADING
> 2017-06-14 11:21:08,794 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource file:/tmp/hadoop-yarn/staging/hadoop/.staging/job_1497364689294_0018/job.split transitioned from INIT to DOWNLOADING
> 2017-06-14 11:21:08,794 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource file:/tmp/hadoop-yarn/staging/hadoop/.staging/job_1497364689294_0018/job.xml transitioned from INIT to DOWNLOADING
> 2017-06-14 11:21:08,794 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource file:/home/q/hadoop/kylin/tomcat/temp/kylin_job_meta3892468167792432608/meta transitioned from INIT to DOWNLOADING
> 2017-06-14 11:21:08,794 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1497364689294_0018_01_000001
> 2017-06-14 11:21:08,794 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Downloading public rsrc:{ file:/home/q/hadoop/kylin/tomcat/temp/kylin_job_meta3892468167792432608/meta, 1497410467000, FILE, null }
> 2017-06-14 11:21:08,796 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /home/q/hadoop/hadoop/tmp/nm-local-dir/nmPrivate/container_1497364689294_0018_01_000001.tokens. Credentials list:
> 2017-06-14 11:21:08,796 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.localizer.ResourceLocalizationService: Failed to download rsrc { { file:/home/q/hadoop/kylin/tomcat/temp/kylin_job_meta3892468167792432608/meta, 1497410467000, FILE, null },pending,[(container_1497364689294_0018_01_000001)],781495827608056,DOWNLOADING}
> java.io.FileNotFoundException: File file:/home/q/hadoop/kylin/tomcat/temp/kylin_job_meta3892468167792432608/meta does not exist
> at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:524)
> at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:737)
> at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:514)
> at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:397)
> at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:250)
> at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:353)
> at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:59)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> 2017-06-14 11:21:08,796 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user hadoop
> 2017-06-14 11:21:08,797 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource file:/home/q/hadoop/kylin/tomcat/temp/kylin_job_meta3892468167792432608/meta(->/home/q/hadoop/hadoop/tmp/nm-local-dir/filecache/18/meta) transitioned from DOWNLOADING to FAILED
> 2017-06-14 11:21:08,797 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.container.Container: Container container_1497364689294_0018_01_000001 transitioned from LOCALIZING to LOCALIZATION_FAILED
> 2017-06-14 11:21:08,797 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalResourcesTrackerImpl: Container container_1497364689294_0018_01_000001 sent RELEASE event on a resource request { file:/home/q/hadoop/kylin/tomcat/temp/kylin_job_meta3892468167792432608/meta, 1497410467000, FILE, null } not present in cache.
> 2017-06-14 11:21:08,797 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoop OPERATION=Container Finished - Failed TARGET=ContainerImpl RESULT=FAILURE DESCRIPTION=Container failed with state: LOCALIZATION_FAILED APPID=application_1497364689294_0018 CONTAINERID=container_1497364689294_0018_01_000001
> 2017-06-14 11:21:08,797 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1497364689294_0018_01_000001 transitioned from LOCALIZATION_FAILED to DONE
>
> This error appears in the yarn-nodemanager logs on machines B and D. Just before it, I found the following in the yarn-nodemanager log on machine C (Kylin is installed only on machine A):
>
> 2017-06-14 11:21:01,131 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1497364689294_0017_01_000002 transitioned from LOCALIZING to LOCALIZED
> 2017-06-14 11:21:01,146 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1497364689294_0017_01_000002 transitioned from LOCALIZED to RUNNING
> 2017-06-14 11:21:01,146 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Neither virutal-memory nor physical-memory monitoring is needed.
> Not running the monitor-thread
> 2017-06-14 11:21:01,149 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [nice, -n, 0, bash, /home/q/hadoop/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1497364689294_0017/container_1497364689294_0017_01_000002/default_container_executor.sh]
> 2017-06-14 11:21:05,024 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1497364689294_0017_01_000002
> 2017-06-14 11:21:05,025 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoop IP=10.90.181.160 OPERATION=Stop Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1497364689294_0017 CONTAINERID=container_1497364689294_0017_01_000002
> 2017-06-14 11:21:05,025 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1497364689294_0017_01_000002 transitioned from RUNNING to KILLING
> 2017-06-14 11:21:05,025 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1497364689294_0017_01_000002
> 2017-06-14 11:21:05,028 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1497364689294_0017_01_000002 is : 143
> 2017-06-14 11:21:05,040 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1497364689294_0017_01_000002 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
> 2017-06-14 11:21:05,041 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hadoop OPERATION=Container Finished - Killed TARGET=ContainerImpl RESULT=SUCCESS APPID=application_1497364689294_0017 CONTAINERID=container_1497364689294_0017_01_000002
> 2017-06-14 11:21:05,041 INFO org.apache.hadoop.yarn.server.
> nodemanager.containermanager.container.Container: Container container_1497364689294_0017_01_000002 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE
>
> It puzzles me why Kylin tries to load a local file from applications on other nodes in step 2. How can I solve this?
>
> Here is some additional information (it may help with analyzing the problem):
> The cluster has 4 machines: A, B, C and D.
> Hadoop version 2.5.0, with snappy support
>       Namenode: A (standby), B (active)
>       Datanode: all
> Hive version 0.13.1, recompiled for hadoop2
> HBase version 0.98.6, recompiled for hadoop 2.5.0
>       Master: A (active) and B
> When I set "hbase.rootdir" in hbase-site.xml to the exact IP address of the active namenode, step 2 is OK, but the build fails in the last five steps. So I changed the setting to the cluster name, and there is no problem in the hbase logs.
>
> Thank you
>
> Best regards
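To make the failure in the logs above concrete: a `file:` URI registered as a YARN local resource is resolved on each NodeManager's own local filesystem, so a file that exists only on the submitting machine (A) cannot be localized on B or D. This is a minimal Python sketch of that behavior, not Kylin or Hadoop code; the paths are hypothetical.

```python
import tempfile
from pathlib import Path
from urllib.parse import urlparse

def localize(resource_uri: str) -> Path:
    """Loosely mimic what the NodeManager's FSDownload does for a file:
    resource: resolve the path on the *local* filesystem and fail if it
    is absent -- which is exactly what happens on a node that never had
    the file."""
    path = Path(urlparse(resource_uri).path)
    if not path.exists():
        raise FileNotFoundError(f"File {resource_uri} does not exist")
    return path

# On the submitting machine the metadata file exists, so it resolves:
tmp = tempfile.mkdtemp(prefix="kylin_job_meta")
meta = Path(tmp, "meta")
meta.write_text("cube metadata")
print(localize(meta.as_uri()))

# On any other NodeManager host the same local path does not exist,
# so localization fails just like the FileNotFoundException in the log:
try:
    localize("file:///home/q/hadoop/kylin/tomcat/temp/no_such_dir/meta")
except FileNotFoundError as e:
    print(e)
```

The fix direction this suggests: the metadata must reach a filesystem visible to every node (e.g. HDFS) before the mappers start, rather than staying in a Tomcat temp directory on one machine.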
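On the hbase.rootdir point raised in the question: with HDFS HA, pointing hbase.rootdir at the nameservice rather than one NameNode's IP keeps HBase working across failover. A sketch of the hbase-site.xml fragment, where the nameservice name `mycluster` is a placeholder for your actual dfs.nameservices value:

```xml
<!-- hbase-site.xml: use the HA nameservice, not an individual NameNode
     address, so the rootdir stays valid after a NameNode failover.
     "mycluster" is a placeholder -- use your dfs.nameservices value. -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://mycluster/hbase</value>
</property>
```

For the nameservice to resolve, HBase must also see the cluster's hdfs-site.xml (with dfs.nameservices and the failover proxy provider settings) on its classpath, typically by copying or symlinking it into the HBase conf directory.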