hive-dev mailing list archives

From Vinod Kumar Vavilapalli <vino...@apache.org>
Subject Re: HowTo: Debugging/Running localmode on YARN ?
Date Tue, 19 Nov 2013 18:44:26 GMT

The local MapReduce job runner was never supported or tested with HDFS, though it could
logically work.
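For readers hunting for the YARN-era equivalent of the old trick, the settings that drive local execution on a Hadoop 2 install are roughly these (a hedged sketch: `fs.defaultFS` is the current, non-deprecated name for `fs.default.name`, and the `file:///tmp` value is illustrative, taken from the workaround below):

```
-- Run the job in-process with the local job runner (Hadoop 2 property,
-- replaces the classic SET mapred.job.tracker=local;)
SET mapreduce.framework.name=local;

-- Optionally point the default filesystem at local disk as well, since
-- the local runner is not supported against HDFS
SET fs.defaultFS=file:///tmp;
```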

Thanks,
+Vinod


On Nov 19, 2013, at 4:58 AM, Remus Rusanu wrote:

> Hello all,
> 
> I just discovered that with the 23 shims, local mode is driven by
> 
> SET  mapreduce.framework.name=local;
> 
> not by the traditional SET mapred.job.tracker=local; Has anyone put together a how-to
> for debugging/running local mode on YARN, like the one Thejas wrote for classic Hadoop at
> http://hadoop-pig-hive-thejas.blogspot.ie/2013/04/running-hive-in-local-mode.html ?
> 
> My specific issue is that in local mode I get an error launching the job due to a missing
> HDFS file:
> 
> java.io.FileNotFoundException: File does not exist: hdfs://sandbox.hortonworks.com:8020/usr/lib/hcatalog/share/hcatalog/hcatalog-core.jar
>        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
>        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
>        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
>        at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
>        at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
>        at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
>        at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
>        at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
>        at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
>        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
>        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
>        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(Subject.java:396)
>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
>        at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
>        at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(Subject.java:396)
>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
>        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
>        at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:433)
>        at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.main(ExecDriver.java:741)
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> 
> Changing SET fs.default.name=file:///tmp; 'solves' the error, but I'm a bit confused
> why using the (valid and running!) HDFS does not work. It seems to me that the HDFS
> resource in question is just a concatenation of the default FS with a local path, not a
> valid HDFS name...
> 
> Thanks,
> ~Remus
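Remus's reading matches how path qualification generally behaves in Hadoop: a scheme-less path handed to the distributed cache is qualified against the default filesystem URI before the file's existence is checked, which is why a local jar path ends up looking like an HDFS URI. A minimal illustrative sketch of that behavior (not Hadoop's actual code; the helper name `qualify` is hypothetical):

```java
import java.net.URI;

// Illustrative only: mimics how a scheme-less path gets prefixed with the
// default filesystem URI (fs.default.name / fs.defaultFS) during job
// submission, producing an hdfs:// URI whether or not the file exists there.
public class PathQualify {
    static String qualify(String defaultFs, String path) {
        URI u = URI.create(path);
        if (u.getScheme() == null) {
            // No scheme: resolve against the default filesystem.
            return defaultFs.replaceAll("/$", "") + path;
        }
        // Already fully qualified (e.g. file:///...): leave untouched.
        return path;
    }

    public static void main(String[] args) {
        System.out.println(qualify("hdfs://sandbox.hortonworks.com:8020",
                "/usr/lib/hcatalog/share/hcatalog/hcatalog-core.jar"));
    }
}
```

This is also why switching the default FS to file:///tmp makes the error go away: the same local path then qualifies to a file:// URI that actually exists on disk.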


