hadoop-common-user mailing list archives

From Esteban Gutierrez Moguel <esteban...@gmail.com>
Subject Re: Hadoop example
Date Tue, 04 Jan 2011 09:21:17 GMT
Hi,

It seems that you need to add your hostname/IP pair to /etc/hosts on both
nodes. It also looks like you need to set up your configuration files
correctly.
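
For example, /etc/hosts on both machines could contain entries along these
lines (the names and addresses here are just placeholders for your own nodes):

    # map each node's hostname to a fixed IP, identically on both machines
    192.168.0.1    master
    192.168.0.2    slave

The important part is that every node resolves the same hostname to the same
address.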

These guides may be helpful:

http://hadoop.apache.org/common/docs/r0.20.2/quickstart.html
http://hadoop.apache.org/common/docs/r0.20.2/cluster_setup.html
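
As a rough sketch of what the cluster setup guide describes (the hostname
"master" and the ports below are placeholders), core-site.xml would contain:

    <property>
      <!-- URI of the default filesystem; use the same name everywhere -->
      <name>fs.default.name</name>
      <value>hdfs://master:54310</value>
    </property>

and mapred-site.xml:

    <property>
      <!-- JobTracker address is a plain host:port, not an hdfs:// URI -->
      <name>mapred.job.tracker</name>
      <value>master:54311</value>
    </property>

Note that mapred.job.tracker takes a plain host:port value rather than an
hdfs:// URI.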

cheers,
esteban.


On Tue, Jan 4, 2011 at 02:38, haiyan <ultramatrixster@gmail.com> wrote:

> I have two nodes set up as a Hadoop test. When I set fs.default.name to
> hdfs://hostname:54310/ in core-site.xml and mapred.job.tracker to
> hdfs://hostname:54311 in mapred-site.xml,
> I received the following error when I started it with start-all.sh.
>
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /home/hadoop/tmp/mapred/system/jobtracker.info could only be replicated to
> 0 nodes, instead of 1
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>        at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(Subject.java:396)
>        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> ...
> Then I had to change hdfs://hostname:54310/ to hdfs://ipAddress:54310/ and
> hdfs://hostname:54311 to hdfs://ipAddress:54311; after that it started fine
> with start-all.sh.
> However, when I ran the wordcount example, I got the following error message.
>
> java.lang.IllegalArgumentException: Wrong FS:
> hdfs://ipAddress:54310/home/hadoop/tmp/mapred/system/job_201101041628_0005/job.xml,
> expected: hdfs://hostname:54310
>        at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
>        at org.apache.hadoop.hdfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:99)
>        at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:155)
>        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:453)
>        at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:745)
>        at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:1664)
>        at org.apache.hadoop.mapred.TaskTracker.access$1200(TaskTracker.java:97)
>        at org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:1629)
>
> From the message above, it seems that hdfs://hostname:port is not suitable
> for running the example? What should I do?
>
> Note: ipAddress means the IP address I used, and hostname means the host
> name I used.
>
