kylin-user mailing list archives

From: ShaoFeng Shi <shaofeng...@apache.org>
Subject: Re: File does not exist error in kylin in step 3 extract fact table distinct columns
Date: Sat, 16 Jun 2018 07:48:38 GMT
Hi Rahul,

I have no idea; have you found the root cause?

2018-06-14 18:43 GMT+08:00 rahul middha <middha.rahul141@gmail.com>:

> While creating a cube in Kylin I am getting the error
>
> "java.io.FileNotFoundException: File does not exist:
> hdfs://localhost:9000/**/hive/lib/hive-hcatalog-core.jar not found", while
> the file is there at that path. Also, when I remove that file from the path,
> the error comes for some other jar file.
>
> My versions are Hadoop 2.7.3, Hive 2.3.3, HBase 1.1.1, Kylin 2.3.1.
>
>
>
> The error is:
>
> java.io.FileNotFoundException: File does not exist:
> hdfs://localhost:9000/home/dir/hive/lib/hive-hcatalog-core.jar
> at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
> at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
> at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
> at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
> at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:99)
> at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
> at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
> at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
> at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
> at org.apache.kylin.engine.mr.common.AbstractHadoopJob.waitForCompletion(AbstractHadoopJob.java:175)
> at org.apache.kylin.storage.hbase.steps.CubeHFileJob.run(CubeHFileJob.java:110)
> at org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:130)
> at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:162)
> at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:67)
> at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:162)
> at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:300)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>
>
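For reference, the stack trace fails in ClientDistributedCacheManager while the job is being submitted: Hadoop resolves a scheme-less path against fs.defaultFS, so a jar path taken from the local Hive installation gets looked up on HDFS rather than on the local disk, and getFileStatus() throws FileNotFoundException if the jar was never copied there. Below is a minimal sketch (not Kylin code; the class name PathResolutionSketch is made up) illustrating that resolution, assuming fs.defaultFS is hdfs://localhost:9000 as in the report.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PathResolutionSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumption: the NameNode address reported in the error.
        conf.set("fs.defaultFS", "hdfs://localhost:9000");

        FileSystem fs = FileSystem.get(conf);
        // A scheme-less path, as it would appear after being copied from the
        // local Hive lib directory.
        Path jar = new Path("/home/dir/hive/lib/hive-hcatalog-core.jar");

        // Qualifying the path uses the default file system, not the local one,
        // which reproduces the URI seen in the exception message.
        System.out.println(jar.makeQualified(fs.getUri(), fs.getWorkingDirectory()));
        // -> hdfs://localhost:9000/home/dir/hive/lib/hive-hcatalog-core.jar

        // Roughly what the distributed-cache setup checks before submitting
        // the MapReduce job; this is false if the jar exists only locally.
        System.out.println(fs.exists(jar) ? "found on HDFS" : "not on HDFS");
    }
}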


-- 
Best regards,

Shaofeng Shi 史少锋
