hive-user mailing list archives

From Abdelrhman Shettia <>
Subject Re: No such file or directory error on simple query
Date Tue, 05 Mar 2013 00:31:04 GMT
Hi Stephen,

Please run "desc extended" on the table to see where the table's directory is on HDFS. Here
is an example.

hive -e "desc extended hcatsmokeid0b0abc02_date252113 ;"
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter
in all the files.
Logging initialized using configuration in jar:file:/usr/lib/hive/lib/hive-common-!/
Hive history file=/tmp/root/hive_job_log_root_201301211932_612003297.txt
id	int	
name	string	
Detailed Table Information	Table(tableName:hcatsmokeid0b0abc02_date252113, dbName:default,
owner:ambari_qa, createTime:1358814367, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:id,
type:int, comment:null), FieldSchema(name:name, type:string, comment:null)], location:hdfs://ambari1:8020/apps/hive/warehouse/hcatsmokeid0b0abc02_date252113,,,
compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe,
parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[],
skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[],
parameters:{transient_lastDdlTime=1358814367}, viewOriginalText:null, viewExpandedText:null,
Time taken: 2.965 seconds
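If you want to pull that location out programmatically, a small sketch follows. The sample line is the "Detailed Table Information" output above (abbreviated); against a live metastore you would pipe in the real output of hive -e "desc extended &lt;table&gt;" instead of the hard-coded string.

```shell
# Extract the warehouse location from `desc extended` output.
# desc_output is a shortened copy of the sample line above; in practice:
#   hive -e "desc extended <table>" | grep 'Detailed Table Information'
desc_output='Detailed Table Information	Table(tableName:hcatsmokeid0b0abc02_date252113, location:hdfs://ambari1:8020/apps/hive/warehouse/hcatsmokeid0b0abc02_date252113, compressed:false)'
# Grab everything after "location:" up to the next comma or close paren.
location=$(printf '%s\n' "$desc_output" | grep -o 'location:[^,)]*' | cut -d: -f2-)
echo "$location"
```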

Hope this helps. 

Hortonworks, Inc.
Technical Support Engineer
Abdelrahman Shettia
Office phone: (708) 689-9609
How am I doing?   Please feel free to provide feedback to my manager Rick Morris at

On Mar 2, 2013, at 1:59 AM, Stephen Boesch <> wrote:

> I am struggling with a "no such file or directory" exception when running a simple
> query in Hive. Unfortunately the actual path is not included in the stack trace;
> the following is all that is provided.
> The query fails with this error when run as hive -e "select * from <table>;", yet
> the same query works inside the hive shell. At the same time, running
> select * from <table2>; inside the shell fails with the same error message.
> I am seeing this error for both HDFS files and S3 files. Without any path
> information it is very difficult and time-consuming to track down.
> Any pointers appreciated.
> Automatically selecting local only mode for query
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use
> org.apache.hadoop.log.metrics.EventCounter in all the files.
> Execution log at: /tmp/impala/impala_20130302095252_79ce9404-6af7-405b-8b06-849fe6c5328d.log
> ENOENT: No such file or directory
> 	at Method)
> 	at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(
> 	at org.apache.hadoop.fs.FilterFileSystem.setPermission(
> 	at org.apache.hadoop.fs.FileSystem.mkdirs(
> 	at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(
> 	at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(
> 	at org.apache.hadoop.mapred.JobClient.access$400(
> 	at org.apache.hadoop.mapred.JobClient$
> 	at org.apache.hadoop.mapred.JobClient$
> 	at Method)
> 	at
> 	at
> 	at org.apache.hadoop.mapred.JobClient.submitJobInternal(
> 	at org.apache.hadoop.mapred.JobClient.submitJob(
> 	at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(
> 	at org.apache.hadoop.hive.ql.exec.ExecDriver.main(
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> 	at java.lang.reflect.Method.invoke(
> 	at org.apache.hadoop.util.RunJar.main(
> Job Submission failed with exception '
> such file or directory)'
> Execution failed with exit status: 1
> Obtaining error information
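One plausible reading of the trace: Hive selected local mode ("Automatically selecting local only mode") and the failure is in RawLocalFileSystem.setPermission, so the ENOENT may come from a missing or unwritable local scratch directory rather than the table's HDFS or S3 data. A hedged probe of that idea is sketched below; the path used is a throwaway stand-in, not Hive's actual setting, which you can see with "set hive.exec.local.scratchdir;" in the hive shell.

```shell
# Probe whether a local scratch dir can be created and chmod'ed:
# the two operations that fail with ENOENT in the trace above.
# SCRATCH is a hypothetical stand-in path, not Hive's real scratch dir.
SCRATCH="${TMPDIR:-/tmp}/hive_scratch_probe"
mkdir -p "$SCRATCH"        # create parents too; a missing parent causes ENOENT
chmod 755 "$SCRATCH"       # the setPermission step from the stack trace
echo "scratch dir ok: $SCRATCH"
```

If either step fails here, fixing the permissions or ownership of the real scratch directory for the querying user is the first thing to try.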
