hadoop-common-user mailing list archives

From Keith Stevens <fozzietheb...@gmail.com>
Subject Setting up MapReduce 2 on a test cluster
Date Mon, 12 Mar 2012 01:18:53 GMT
Hi All,

I've been trying to set up Cloudera's CDH4 Beta 1 release of MapReduce 2.0 on
a small cluster for testing, but I'm not having much luck getting things
running.  I've been following the guides to configure everything.  HDFS seems
to be working properly, in that I can access the file system, load files, and
read them.

However, running jobs doesn't seem to work correctly.  I'm trying to run
just a sample job with:

hadoop jar
randomwriter -Dmapreduce.job.user.name=$USER
-Dmapreduce.randomwriter.bytespermap=10000 -Ddfs.blocksize=536870912
-Ddfs.block.size=536870912 -libjars

When running I get a ClassNotFoundException for
org.apache.hadoop.hdfs.DistributedFileSystem on the local node running the
task.  I have fs.hdfs.impl set to
org.apache.hadoop.hdfs.DistributedFileSystem, which I believe is correct,
but I'm not sure why the node isn't finding the class.
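For reference, that mapping lives in core-site.xml and would look roughly like the fragment below. This is only a sketch: in a stock install the same binding is normally supplied by the bundled default resources, so it rarely needs to be set by hand, and setting it does not by itself put the class on the classpath.

```xml
<!-- core-site.xml: maps the hdfs:// scheme to its FileSystem class.
     This only tells Hadoop which class to load; the class itself must
     still be present in a jar on the node's classpath. -->
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
```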

In my setup, everything is located under /usr/local/hadoop on all the nodes,
and all the relevant environment variables point to that directory.  So when
the local nodes start up they include this:


which looks to be correct, so I'm not exactly sure where the problem is
coming from.
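Not from the original mail, but one way to narrow this down is to check whether any jar under the install root actually contains the missing class. In the sketch below, /usr/local/hadoop is the install root mentioned above; the function name and the grep-over-jar-bytes trick are mine. Zip entry names are stored uncompressed inside a jar, so grepping the raw bytes for the slash-separated class path is a crude but dependency-free check.

```shell
# Sketch: print every jar under a directory that contains a given class.
# Entry names sit verbatim in the zip directory, so a binary grep suffices.
find_class_jar() {
  root=$1
  entry=$2   # class as a zip entry, e.g. org/apache/hadoop/hdfs/DistributedFileSystem.class
  find "$root" -name '*.jar' 2>/dev/null | while read -r jar; do
    if grep -q "$entry" "$jar"; then
      echo "$jar"
    fi
  done
}

# /usr/local/hadoop is the install root from the mail above.
find_class_jar /usr/local/hadoop 'org/apache/hadoop/hdfs/DistributedFileSystem.class'
```

If nothing prints, the HDFS jar is missing from the install; if a jar prints but tasks still fail, the daemon's runtime classpath probably doesn't include it (compare against the output of `hadoop classpath`).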

Any suggestions on what might be wrong or how to further diagnose the
problem would be greatly appreciated.

