hive-user mailing list archives

From Gopal V <>
Subject Re: Hive on Tez Error
Date Sat, 22 Nov 2014 02:02:09 GMT
On 11/21/14, 10:11 AM, peterm_second wrote:

> Caused by: Previous writer likely failed to write
> hdfs://*.
> Failing because I am unlikely to write too.
>       at
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(
>       at

> I am a noob when it comes to Tez and Hive; has anyone seen this problem?
> Any ideas what might be causing it?

I'll take a shot at the problem.

Somewhere in your configuration, there's the equivalent of a "*" in the 
AUX jars, in an "ADD FILE", or in tez.lib.uris, which we interpret 
as-is instead of expanding as a glob.
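To illustrate, here's a sketch of the tez-site.xml setting in question; the paths below are hypothetical, and the point is only that tez.lib.uris should name a concrete archive (or directory), not a glob:

```xml
<!-- tez-site.xml: hypothetical paths, shown only to illustrate the difference -->

<!-- Problematic: a literal "*" that may get localized as-is rather than expanded -->
<property>
  <name>tez.lib.uris</name>
  <value>hdfs:///apps/tez/lib/*</value>
</property>

<!-- Safer: point at a concrete tarball instead of a glob -->
<property>
  <name>tez.lib.uris</name>
  <value>hdfs:///apps/tez/tez-0.6.0.tar.gz</value>
</property>
```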

> As far as I understand, Tez doesn't use the distributed cache, so it tries
> to localize all the resources for each session. I have no idea what
> process is holding those files and causing it to fail with this error.
> I am using Hive 0.14.0 and Tez 0.6.

Actually, Tez uses the distributed cache heavily (what's failing here is 
the upload into the dist-cache location). We now have multiple levels of 
the same concept, with re-localization coming into play during a session 
if you do an "ADD JAR".
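For example (jar path and table name below are hypothetical), an explicit "ADD JAR" mid-session is exactly what triggers that re-localization:

```sql
-- Hypothetical jar path: each ADD JAR in a live session causes Tez to
-- re-localize the new resource for the session's containers.
ADD JAR /tmp/my-serde.jar;

-- Querying an HBase/Accumulo-backed table right after a native ORC query
-- exercises the same re-localization path for the storage-handler jars.
SELECT COUNT(*) FROM my_hbase_table;
```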

This indirectly shows up when someone queries an Accumulo table or HBase 
table immediately after a regular native ORC table query.

But unlike MR, Tez doesn't have a job.jar equivalent uploaded during job 
submission, so anything like an invalid HDFS path (one ending in "/*") 
will fail while the session is spinning up.
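One way to sanity-check this before starting a session (paths below are hypothetical) is to verify that the configured URI actually resolves in HDFS:

```
# Hypothetical paths: confirm the configured tez.lib.uris target exists.
# A literal "*" passed through un-expanded would fail this lookup.
hdfs dfs -ls hdfs:///apps/tez/tez-0.6.0.tar.gz

# See what a "*" would have matched, so it can be replaced with explicit entries:
hdfs dfs -ls /apps/tez/lib/
```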

