hadoop-common-issues mailing list archives

From "Abin Shahab (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HADOOP-9883) Local mode FileNotFoundException: File does not exist
Date Tue, 20 Aug 2013 06:12:52 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-9883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Abin Shahab updated HADOOP-9883:
--------------------------------

    Attachment: HADOOP-9883.patch
    
> Local mode FileNotFoundException: File does not exist
> -----------------------------------------------------
>
>                 Key: HADOOP-9883
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9883
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.0.2-alpha
>         Environment: Centos 6.3
> Hadoop 2.0.2-alpha
> Hive 0.10.0
>            Reporter: Abin Shahab
>            Priority: Critical
>         Attachments: HADOOP-9883.patch, HADOOP-9883.patch
>
>
> Hive jobs in local mode fail with the error posted below. The jar file reported as missing actually exists, with the following permissions:
> > ls -l hive-0.10.0/lib/hive-builtins-0.10.0.jar
> -rw-rw-r-- 1 ashahab ashahab 3914 Dec 18 2012 hive-0.10.0/lib/hive-builtins-0.10.0.jar
> Steps to reproduce
> [vcc_chaiken@HadoopDesktop0 ~]$ hive
> Logging initialized using configuration in jar:file:/opt/hive/lib/hive-common-0.10.0.jar!/hive-log4j.properties
> Hive history file=/disk1/hive/log/vcc_chaiken/hive_job_log_vcc_chaiken_201307162119_876702406.txt
> hive> create database chaiken_test_00;
> OK
> Time taken: 1.675 seconds
> hive> use chaiken_test_00;
> OK
> Time taken: 0.029 seconds
> hive> create table chaiken_test_table(foo INT);
> OK
> Time taken: 0.301 seconds
> hive> select count(*) from chaiken_test_table;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> Starting Job = job_1373902166027_0061, Tracking URL = http://100-01-09.sc1.verticloud.com:8088/proxy/application_1373902166027_0061/
> Kill Command = /opt/hadoop/bin/hadoop job  -kill job_1373902166027_0061
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
> 2013-07-16 21:20:25,617 Stage-1 map = 0%,  reduce = 0%
> 2013-07-16 21:20:30,026 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.13 sec
> 2013-07-16 21:20:31,110 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.13 sec
> 2013-07-16 21:20:32,188 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.13 sec
> 2013-07-16 21:20:33,270 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.13 sec
> 2013-07-16 21:20:34,356 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.13 sec
> 2013-07-16 21:20:35,455 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.4 sec
> MapReduce Total cumulative CPU time: 3 seconds 400 msec
> Ended Job = job_1373902166027_0061
> MapReduce Jobs Launched: 
> Job 0: Map: 1  Reduce: 1   Cumulative CPU: 3.4 sec   HDFS Read: 246 HDFS Write: 2 SUCCESS
> Total MapReduce CPU Time Spent: 3 seconds 400 msec
> OK
> 0
> Time taken: 20.627 seconds
> hive> set hive.exec.mode.local.auto;
> hive.exec.mode.local.auto=false
> hive> set hive.exec.mode.local.auto=true;
> hive> set hive.exec.mode.local.auto;     
> hive.exec.mode.local.auto=true
> hive> select count(*) from chaiken_test_table;
> Automatically selecting local only mode for query
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> 13/07/16 21:20:49 WARN conf.Configuration: file:/disk1/hive/scratch/vcc_chaiken/hive_2013-07-16_21-20-47_210_4351529322776236119/-local-10002/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 13/07/16 21:20:49 WARN conf.Configuration: file:/disk1/hive/scratch/vcc_chaiken/hive_2013-07-16_21-20-47_210_4351529322776236119/-local-10002/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
> WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
> Execution log at: /tmp/vcc_chaiken/vcc_chaiken_20130716212020_4db219e0-cf40-4e73-ac0d-a1d2eaca934e.log
> java.io.FileNotFoundException: File does not exist: /opt/hive/lib/hive-builtins-0.10.0.jar
> at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:782)
> at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:208)
> at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:71)
> at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:252)
> at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:290)
> at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:361)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1215)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1215)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:617)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:612)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:612)
> at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:447)
> at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:689)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
> Job Submission failed with exception 'java.io.FileNotFoundException(File does not exist: /opt/hive/lib/hive-builtins-0.10.0.jar)'
> Execution failed with exit status: 1
> Obtaining error information
> Task failed!
> Task ID:
>   Stage-1
> Logs:
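
The stack trace is telling: even though the query runs in local mode, the path /opt/hive/lib/hive-builtins-0.10.0.jar is handed to DistributedFileSystem.getFileStatus. A plausible reading (an assumption, not confirmed by the report) is that the scheme-less jar path is being qualified against the cluster's default filesystem (fs.defaultFS, an HDFS URI) instead of the local filesystem, so the lookup goes to HDFS where the jar does not exist. A minimal Python sketch of that qualification rule; qualify() and the namenode address are illustrative, not Hadoop code:

```python
from urllib.parse import urlparse

def qualify(path, default_fs="hdfs://namenode:8020"):
    """Mimic Hadoop-style path qualification: a path with no URI scheme
    inherits the scheme and authority of the default filesystem."""
    if urlparse(path).scheme:
        # Already fully qualified (e.g. file:///...) -- leave it alone.
        return path
    # Scheme-less paths resolve against fs.defaultFS.
    return default_fs.rstrip("/") + path

# The scheme-less jar path silently becomes an HDFS location...
print(qualify("/opt/hive/lib/hive-builtins-0.10.0.jar"))
# -> hdfs://namenode:8020/opt/hive/lib/hive-builtins-0.10.0.jar

# ...while an explicit file:// URI stays on the local filesystem.
print(qualify("file:///opt/hive/lib/hive-builtins-0.10.0.jar"))
# -> file:///opt/hive/lib/hive-builtins-0.10.0.jar
```

If this reading is right, a local-mode job submitter needs to qualify auxiliary jar paths with an explicit file:// scheme (or against the local FileSystem) before the distributed-cache timestamp check, which is presumably what the attached patch addresses.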

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
