hadoop-mapreduce-user mailing list archives

From zhengjun chen <zhjchen...@gmail.com>
Subject Re: hadoop cluster install path
Date Thu, 07 Apr 2011 02:56:18 GMT
I did as you said and built a small Hadoop cluster of three nodes: one
master and two slaves. The master node runs the NameNode and JobTracker;
the other two nodes run the DataNodes and TaskTrackers. Each node has
Hadoop installed on a different path.
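
For reference, what must agree across the nodes is the configuration, not
the install path. A minimal sketch for a 0.20-era cluster like this one
(the hostname `master` and the ports are placeholders, not taken from my
actual setup):

```xml
<!-- conf/core-site.xml on every node: point at the same NameNode -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>

<!-- conf/mapred-site.xml on every node: point at the same JobTracker -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>
```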

My map/reduce program works fine in single-node mode. However, when it
runs on the cluster above, it always fails with the following error:

11/04/05 10:23:26 INFO mapred.FileInputFormat: Total input paths to process
: 2
11/04/05 10:23:27 INFO mapred.JobClient: Running job: job_201104032315_0002
11/04/05 10:23:28 INFO mapred.JobClient:  map 0% reduce 0%
11/04/05 10:23:37 INFO mapred.JobClient:  map 66% reduce 0%
11/04/05 10:23:46 INFO mapred.JobClient:  map 66% reduce 22%
11/04/05 10:23:51 INFO mapred.JobClient: Task Id :
attempt_201104032315_0002_m_000002_0, Status : FAILED
"java.io.IOException: Could not obtain block: blk_-6124044027585573521_1094
file=/user/zhc209/cc_edge"

This is strange: I put the file cc_edge into HDFS before running the
map/reduce program (the program needs this file). I checked the two data
nodes and they are working correctly, and the classic WordCount example
runs fine on the cluster. Any suggestions are welcome. Thank you very much.
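
One thing worth trying for a "Could not obtain block" error is asking HDFS
itself whether every block of the input file is still healthy and served by
a live DataNode. A sketch using the 0.20-era fsck command (the path is the
one from the job output above):

```shell
# List each block of the input file, its replicas, and which DataNodes
# hold them; missing or under-replicated blocks would explain the error.
hadoop fsck /user/zhc209/cc_edge -files -blocks -locations
```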

On Fri, Mar 4, 2011 at 12:54 PM, Harsh J <qwertymaniac@gmail.com> wrote:

> Hello,
>
> On Thu, Mar 3, 2011 at 8:05 PM, zhengjun chen <zhjchen.sa@gmail.com>
> wrote:
> > I tried to run Hadoop on a multi-node cluster, with each node
> > installing Hadoop on a different path, but without success.
> >
> > Is it possible to run Hadoop on a multi-node cluster where the nodes
> > have Hadoop installed on different paths?
> >
>
> You can start the slave services on the slave nodes manually (they
> will cluster automatically, if the configuration is valid across your
> heterogeneous setup).
>
> Use 'hadoop-daemon.sh {stop,start}
> {tasktracker,datanode,secondarynamenode}' to toggle individual daemons
> on target machines.
>
> [You may write a script to achieve an automation via this, perhaps --
> instead of letting the default slaves.sh being used via
> hadoop-daemons.sh]
>
> --
> Harsh J
> www.harshj.com
>
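
The automation Harsh suggests could look something like the sketch below: a
stand-in for the default slaves.sh that respects a different Hadoop install
path per node. The hostnames, paths, and the `slaves-paths.txt` format are
all hypothetical; `DRY_RUN=1` only prints the ssh commands it would run.

```shell
# Hypothetical per-node install list; format: "hostname /path/to/hadoop".
cat > slaves-paths.txt <<'EOF'
slave1 /opt/hadoop-0.20.2
slave2 /home/hadoop/hadoop-0.20.2
EOF

DRY_RUN=1   # set to 0 to actually start the daemons over ssh
while read -r host hadoop_home; do
  cmd="$hadoop_home/bin/hadoop-daemon.sh start datanode && $hadoop_home/bin/hadoop-daemon.sh start tasktracker"
  if [ "$DRY_RUN" = 1 ]; then
    echo "ssh $host \"$cmd\""
  else
    ssh "$host" "$cmd"
  fi
done < slaves-paths.txt
```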



-- 
Best regards,
Zhengjun
