hadoop-common-user mailing list archives

From modemide <modem...@gmail.com>
Subject Re: Hadoop Distributed System Problems: Does not recognise any slave nodes
Date Thu, 24 Mar 2011 14:50:37 GMT
I'm also new to hadoop, but I was able to get my cluster up and
running.  I'm not familiar with Nutch though.

In any case, my assumption is that Nutch relies on a working Hadoop
cluster as the base and adds a few configurations on top to integrate the two.

Here are some things that might help you:
* Have you edited your slaves file to include the slave computers and
the masters file to include the jobtracker?
* I also noticed that you are using OpenJDK instead of Sun
Java. I went with the Hadoop-recommended Java distribution. Is there
any particular reason for using OpenJDK?
* I'll assume that, because you said the files are replicated on every
computer, you only have two computers operating as slaves?
* Have you finished the configuration for your slaves? Can you attach
those files? (Attachments worked perfectly for me; unfortunately I
can't visit the paste sites at work.)
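
For reference, here is a minimal sketch of what those two files can look like in a stock Hadoop conf/ directory. The hostnames are made-up examples; note that in stock Hadoop the conf/masters file actually lists the secondary namenode host (the jobtracker address is set in mapred-site.xml), although several tutorials put the master host there as well:

```shell
# Run from the Hadoop installation directory; hostnames are examples.
mkdir -p conf

# conf/masters: one hostname per line. In stock Hadoop this is the
# host that runs the secondary namenode, not the jobtracker.
echo "master-node" > conf/masters

# conf/slaves: one hostname per line; each listed host runs a
# datanode and a tasktracker.
cat > conf/slaves <<'EOF'
slave-node-1
slave-node-2
EOF
```

The start scripts (bin/start-dfs.sh, bin/start-mapred.sh) read these files over SSH to launch the daemons, so the hostnames must be resolvable and reachable from the master.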

Hope that gets you started in the right direction. Also, if it helps,
I went through these tutorials several times and found them very
helpful. Maybe they will also help you:

On Thu, Mar 24, 2011 at 9:39 AM, Harsh J <qwertymaniac@gmail.com> wrote:
> Hello,
> Thanks for attaching the log.
> On Thu, Mar 24, 2011 at 5:34 PM, Andy XUE <andyxueyuan@gmail.com> wrote:
>> and the log file with error
>> message (*hadoop-rui-jobtracker-ss2.log <http://db.tt/PPGhEaa>*) are linked.
> This is a case of
> http://wiki.apache.org/hadoop/FAQ#What_does_.22file_could_only_be_replicated_to_0_nodes.2C_instead_of_1.22_mean.3F
> --
> Harsh J
> http://harshj.com
