hadoop-common-user mailing list archives

From Arun C Murthy <...@yahoo-inc.com>
Subject Re: Jobtracker config?
Date Mon, 29 Sep 2008 22:37:53 GMT

On Sep 29, 2008, at 2:52 PM, Saptarshi Guha wrote:
> Setup:
> I am running the namenode on A, the sec. namenode on B and the  
> jobtracker on C. The datanodes and tasktrackers are on Z1,Z2,Z3.
>
> Problem:
> However, the jobtracker is starting up on A. Here are my configs for  
> Jobtracker

This would happen if you ran 'start-all.sh' on A rather than
start-dfs.sh on A and start-mapred.sh on C (the jobtracker host). Is that what you did?

If not, please post the commands you used to start the HDFS and
MapReduce clusters...
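
For the layout you describe, roughly the following should do it (a sketch, assuming a stock install under $HADOOP_HOME on each machine; adjust the path to wherever Hadoop actually lives for you):

  # On A (the namenode): starts the namenode locally, the datanodes on
  # the hosts in conf/slaves, and the secondary namenode on the host(s)
  # listed in conf/masters.
  $HADOOP_HOME/bin/start-dfs.sh

  # On C (the jobtracker): starts the jobtracker locally and the
  # tasktrackers on the hosts in conf/slaves.
  $HADOOP_HOME/bin/start-mapred.sh

start-all.sh just runs both of the above on whichever machine you invoke it from, which is why the jobtracker came up on A.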

Arun

>
> <property>
>  <name>mapred.job.tracker</name>
>  <value>C:54311</value>
>  <description>The host and port that the MapReduce job tracker runs
>  at.  If "local", then jobs are run in-process as a single map
>  and reduce task.
>  </description>
> </property>
> <property>
>  <name>mapred.job.tracker.http.address</name>
>  <value>C:50030</value>
>  <description>
>    The job tracker http server address and port the server will  
> listen on.
>    If the port is 0 then the server will start on a free port.
>  </description>
> </property>
>
> Also, my masters file contains one entry for B (so that the sec. namenode
> starts on B) and my slaves file contains Z1,Z2,Z3.
> The config files are synchronized across all machines.
>
> Any help would be appreciated.
> Thank you
> Saptarshi
>
> Saptarshi Guha | saptarshi.guha@gmail.com | http://www.stat.purdue.edu/~sguha
>
>
>

