hadoop-hdfs-user mailing list archives

From "Kevin Burton" <rkevinbur...@charter.net>
Subject RE: Can't initialize cluster
Date Tue, 30 Apr 2013 18:06:46 GMT
I am making progress. Now I get the error:


13/04/30 12:59:40 WARN mapred.JobClient: Use GenericOptionsParser for
parsing the arguments. Applications should implement Tool for the same.

13/04/30 12:59:40 INFO mapred.JobClient: Cleaning up the staging area

13/04/30 12:59:40 ERROR security.UserGroupInformation:
PriviledgedActionException as:kevin (auth:SIMPLE)
cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input
path does not exist: hdfs://devubuntu05:9000/user/kevin/input

Exception in thread "main"
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does
not exist: hdfs://devubuntu05:9000/user/kevin/input


When I run it with java -jar, the input and output are local folders. When
running it with hadoop jar, it seems to expect the folders (input and
output) to be on the HDFS file system. I am not sure why these two methods
of invocation don't make the same file system assumptions.


The two invocations are:


hadoop jar WordCount.jar input output (which gives the above exception)


java -jar WordCount.jar input output (which outputs the word count statistics
to the output folder)


This is run in the local /home/kevin/WordCount folder.
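(For reference: when launched with hadoop jar, relative paths such as "input"
resolve against the user's HDFS home directory, here /user/kevin, not against
the local working directory. The missing HDFS input directory can be staged
with the standard hadoop fs commands; the local path below is assumed from the
folder mentioned above.)

```shell
# Create the input directory in HDFS (resolves to /user/kevin/input)
hadoop fs -mkdir input

# Copy the local text files into it (local path assumed from this thread)
hadoop fs -put /home/kevin/WordCount/*.txt input

# Verify the files are there before re-running the job
hadoop fs -ls input
```

These commands must be run against the same cluster the job submits to, so
the listed path matches the hdfs://devubuntu05:9000/user/kevin/input path in
the exception.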




From: Mohammad Tariq [mailto:dontariq@gmail.com] 
Sent: Tuesday, April 30, 2013 12:33 PM
To: user@hadoop.apache.org
Subject: Re: Can't initialize cluster


Set "HADOOP_MAPRED_HOME" in your hadoop-env.sh file and re-run the job. See
if it helps.
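A sketch of the change, assuming a package-style install layout (the exact
path is an assumption; point it at wherever your MapReduce libraries live):

```shell
# in hadoop-env.sh -- path below is an example, adjust for your install
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
```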

Warm Regards,
Mohammad Tariq





On Tue, Apr 30, 2013 at 10:10 PM, Kevin Burton <rkevinburton@charter.net>
wrote:
To be clear when this code is run with 'java -jar' it runs without
exception. The exception occurs when I run with 'hadoop jar'.


From: Kevin Burton [mailto:rkevinburton@charter.net] 
Sent: Tuesday, April 30, 2013 11:36 AM
To: user@hadoop.apache.org
Subject: Can't initialize cluster


I have a simple MapReduce job that I am trying to get to run on my cluster.
When I run it I get:


13/04/30 11:27:45 INFO mapreduce.Cluster: Failed to use
org.apache.hadoop.mapred.LocalClientProtocolProvider due to error: Invalid
"mapreduce.jobtracker.address" configuration value for LocalJobRunner :

13/04/30 11:27:45 ERROR security.UserGroupInformation:
PriviledgedActionException as:kevin (auth:SIMPLE) cause:java.io.IOException:
Cannot initialize Cluster. Please check your configuration for
mapreduce.framework.name and the correspond server addresses.

Exception in thread "main" java.io.IOException: Cannot initialize Cluster.
Please check your configuration for mapreduce.framework.name and the
correspond server addresses.
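(For reference: this failure means the client cannot decide whether to run
jobs locally or submit them to a JobTracker. Both property names in the error
are normally set in mapred-site.xml; a minimal fragment might look like the
following, where the host and port are assumptions based on the addresses
appearing elsewhere in this thread.)

```xml
<!-- mapred-site.xml: hostname/port assumed from this thread -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>classic</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>devubuntu05:9001</value>
  </property>
</configuration>
```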


My core-site.xml looks like:





<property>
  <name>fs.default.name</name>
  <value>hdfs://devubuntu05:9000</value>
  <description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation.</description>
</property>



So I am unclear as to why it is looking at devubuntu05:9001?


Here is the code:


    public static void WordCount( String[] args ) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        // class names below follow the standard WordCount example
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }



