hadoop-common-user mailing list archives

From Varun Thacker <varunthacker1...@gmail.com>
Subject Re: some doubts
Date Fri, 05 Mar 2010 13:19:59 GMT
This is what my jar looks like.
Its name is Election.jar.
These are the files inside the jar:
Manifest-Version: 1.0
Created-By: 1.6.0_0 (Sun Microsystems Inc.)

This is how I run it:
hadoop@varun:~/hadoop-0.20.1$ bin/hadoop jar Election.jar Election.class
gutenberg gutenberg-output

Exception in thread "main" java.lang.ClassNotFoundException: Election.class
    at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
    at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:247)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:149)

What am I doing wrong?
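A likely cause, judging from the stack trace: RunJar hands the argument after the jar name to Class.forName, so "Election.class" is looked up as a class literally named Election.class. Assuming the compiled Election class sits at the top level of the jar in the default package, the class name would be given without the .class extension:

hadoop@varun:~/hadoop-0.20.1$ bin/hadoop jar Election.jar Election gutenberg gutenberg-output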

On Fri, Mar 5, 2010 at 10:40 AM, Eric Sammer <eric@lifeless.net> wrote:

> On 3/4/10 11:42 PM, Varun Thacker wrote:
> > I am using Ubuntu Linux. I was able to get the standalone Hadoop cluster
> > running and run the wordcount example.
> > Before I start writing Hadoop programs I wanted to compile the wordcount
> > example on my own.
> > So this is what i did to make the jar file on my own.
> >
> > javac -classpath /home/varun/hadoop/hadoop-0.20.1/hadoop-0.20.1-core.jar WordCount.java
> > jar -cvf wordcount.jar -C /media/d/iproggys/java/Hadoop/src/wordcount/ .
> >
> > Is this the correct way to do it?
> That looks correct, yes. For anything more complicated than something
> like this, you'll want to use a build tool like Maven or Ant, though.
> > I had one more doubt while running the example. This is what I do to run
> the
> > mapreduce job.
> > bin/hadoop jar hadoop-0.20.1-examples.jar wordcount gutenberg
> > gutenberg-output
> >
> > What is wordcount?
> > gutenberg being the input dir.
> > gutenberg-output being the output dir.
> The 'hadoop jar' command requires the jar file. The next argument is
> usually the name of the class to run from the jar file. This is not
> required if the jar's manifest file specifies it, though (which the
> examples jar does). In this case, this is just a normal argument passed
> to the main method of the main class. Like you said, the other arguments
> are the input and output directories.
> If you unjar the examples and look at the manifest file, you'll see the
> line:
> Main-Class: org/apache/hadoop/examples/ExampleDriver
> This is the class that gets run for this jar file.
> If you run 'hadoop jar' with no other arguments, you'll see the usage
> statement:
> # $HADOOP_HOME/bin/hadoop jar
> RunJar jarFile [mainClass] args...
> If you run the example jar without the arguments, you'll see the example
> code usage:
> # $HADOOP_HOME/bin/hadoop jar hadoop-0.20.1+152-examples.jar
> An example program must be given as the first argument.
> Valid program names are:
>  aggregatewordcount: An Aggregate based map/reduce program that counts
> the words in the input files.
>  aggregatewordhist: An Aggregate based map/reduce program that computes
> the histogram of the words in the input files.
>  dbcount: An example job that count the pageview counts from a database.
>  grep: A map/reduce program that counts the matches of a regex in the
> input.
>  join: A job that effects a join over sorted, equally partitioned datasets
>  multifilewc: A job that counts words from several files.
>  pentomino: A map/reduce tile laying program to find solutions to
> pentomino problems.
>  pi: A map/reduce program that estimates Pi using monte-carlo method.
>  randomtextwriter: A map/reduce program that writes 10GB of random
> textual data per node.
>  randomwriter: A map/reduce program that writes 10GB of random data per
> node.
>  secondarysort: An example defining a secondary sort to the reduce.
>  sleep: A job that sleeps at each map and reduce task.
>  sort: A map/reduce program that sorts the data written by the random
> writer.
>  sudoku: A sudoku solver.
>  teragen: Generate data for the terasort
>  terasort: Run the terasort
>  teravalidate: Checking results of terasort
>  wordcount: A map/reduce program that counts the words in the input files.
> Note that 'wordcount' is one of the options.
> Hope this helps.
> --
> Eric Sammer
> eric@lifeless.net
> http://esammer.blogspot.com
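
To make the quoted Main-Class point concrete, here is a minimal sketch (assuming a class named Election in the default package, compiled to Election.class in the current directory): the jar tool's -e option writes the Main-Class entry into the manifest, after which hadoop jar needs no class name at all.

jar cvfe Election.jar Election Election.class
bin/hadoop jar Election.jar gutenberg gutenberg-output

The resulting manifest would then contain a line like "Main-Class: Election", which is the same mechanism that lets the examples jar run without naming ExampleDriver on the command line.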


Varun Thacker
