hadoop-common-user mailing list archives

From Kirk Hunter <khun...@ptpnow.com>
Subject Wrong FS Exception
Date Mon, 04 May 2009 20:54:01 GMT

Can someone tell me how to resolve the following error message, found in the
jobtracker log file when trying to start MapReduce?

grep FATAL *
hadoop-hadoop-jobtracker-hadoop-1.log:2009-05-04 16:35:14,176 FATAL
org.apache.hadoop.mapred.JobTracker: java.lang.IllegalArgumentException:
Wrong FS: hdfs://usr/local/hadoop-datastore/hadoop-hadoop/mapred/system,
expected: hdfs://localhost:54310
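
The exception suggests that some directory property (possibly mapred.system.dir
or hadoop.tmp.dir, judging from the path in the message) was given as
hdfs://usr/local/..., so Hadoop parses "usr" as the URI authority and it fails
to match fs.default.name (hdfs://localhost:54310). A minimal sketch of how the
two entries would normally line up in hadoop-site.xml; the property names and
values below are assumptions based on the path in the error, not taken from the
poster's actual file:

```xml
<!-- fs.default.name and any HDFS directory properties must agree:
     give either a plain path (resolved against the default FS) or a
     full URI whose authority matches localhost:54310. -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
<property>
  <name>mapred.system.dir</name>
  <!-- a plain path, NOT hdfs://usr/local/... -->
  <value>/usr/local/hadoop-datastore/hadoop-hadoop/mapred/system</value>
</property>
```

With a plain path, the directory is resolved against the default filesystem, so
the two can never disagree.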

Here is my hadoop-site.xml as well


<description>A base for other temporary directories.</description>

<property> <!-- OH: this is to solve HADOOP-1212 bug that causes
"Incompatible namespaceIDs" in datanode log -->
<!-- if incompatible problem persists, %rm -r
hadoop/dfs/data from problematic datanode and reformat namenode -->

<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. The uri's scheme determines
the config property (fs.SCHEME.impl) naming the FileSystem implementation
class. The uri's authority is used to determine the host, port, etc. for a
filesystem.</description>

<description>The host and port that the MapReduce job tracker runs at. If
"local", then jobs are run in-process as a single map and reduce task.</description>

View this message in context: http://www.nabble.com/Wrong-FS-Exception-tp23376486p23376486.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
