hive-user mailing list archives

From 张伟 <zhangwei.jus...@gmail.com>
Subject Re: Hive-0.13 java.io.FileNotFoundException: HIVE_PLAN not found
Date Tue, 17 Jun 2014 03:52:25 GMT
Hi Jason,

Thanks a lot for your tips!

I finally found the problem: I was also running Shark 0.9.1 on the same
cluster, and it is compiled against Hive 0.11. When YARN starts, it picks
up the Hive 0.11 jar files, which leads to the error.

I've removed the Shark classpath from yarn.application.classpath in
yarn-site.xml, and the error is fixed!
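
For anyone who hits the same issue: the fix boils down to making sure
yarn.application.classpath in yarn-site.xml lists only the standard Hadoop
entries and no Shark/Hive 0.11 jars. The entries below are the usual
Hadoop 2.x defaults, shown for illustration only; the exact paths depend
on your installation layout:

    <property>
      <name>yarn.application.classpath</name>
      <value>$HADOOP_CONF_DIR,
        $HADOOP_COMMON_HOME/share/hadoop/common/*,
        $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
        $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
        $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
        $HADOOP_YARN_HOME/share/hadoop/yarn/*,
        $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
    </property>

After a change like this, restart the NodeManagers so that new containers
pick up the cleaned classpath.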

Thank you!


2014-06-17 7:04 GMT+08:00 Jason Dere <jdere@hortonworks.com>:

> Can you confirm you're using Hive 0.13? The stack trace looks more like it
> was on Hive 0.11.
> Is uberized mode enabled in YARN (mapreduce.job.ubertask.enable)? Could
> be due to HIVE-5857.
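> A quick way to check (a generic tip, not specific to this thread) is to
> print the property from the Hive CLI:
>
>     hive -e "set mapreduce.job.ubertask.enable;"
>
> If it comes back true, disabling uberized mode in mapred-site.xml should
> avoid the HIVE-5857 code path:
>
>     <property>
>       <name>mapreduce.job.ubertask.enable</name>
>       <value>false</value>
>     </property>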
>
> On Jun 16, 2014, at 7:42 AM, 张伟 <zhangwei.justin@gmail.com> wrote:
>
> Hi,
>
>     I run Hadoop-2.2.0 + Hive-0.13.0 on a cluster. The WordCount example
> runs successfully, and creating tables in the Hive CLI works fine. But
> whenever I run a Hive query that launches MapReduce jobs, I keep getting
> errors like:
>
> Diagnostic Messages for this Task:
> Error: java.lang.RuntimeException: java.io.FileNotFoundException:
> HIVE_PLAN7b8ea437-8ec3-4c05-af4e-3cd6466dce85 (No such file or directory)
>     at org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:230)
>     at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
>     at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:381)
>     at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:374)
>     at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:540)
>     at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:167)
>     at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:408)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
> Caused by: java.io.FileNotFoundException:
> HIVE_PLAN7b8ea437-8ec3-4c05-af4e-3cd6466dce85 (No such file or directory)
>     at java.io.FileInputStream.open(Native Method)
>     at java.io.FileInputStream.<init>(FileInputStream.java:146)
>     at java.io.FileInputStream.<init>(FileInputStream.java:101)
>     at org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:221)
>     ... 12 more
>
>
> FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> MapReduce Jobs Launched:
> Job 0: Map: 1   HDFS Read: 0 HDFS Write: 0 FAIL
> Total MapReduce CPU Time Spent: 0 msec
>
>
> According to the error messages above, I found that hive.exec.scratchdir
> (/tmp/hive-${user.name}), where the Hive plan files are stored, is cleaned
> up when the query finishes. With Hive 0.12 the directory was not cleaned
> up, and all the plan files were kept there. I think this is the main cause
> of the errors.
>
> How can I fix it? Looking forward to your reply!
>
>
>
