hadoop-common-dev mailing list archives

From phonechen <phonec...@gmail.com>
Subject Re: Masters and Slaves in Slaves machines
Date Tue, 27 May 2008 01:47:40 GMT
1. Configure the namenode and jobtracker addresses in hadoop-site.xml:

<configuration>
  <!-- address of the namenode (the HDFS master) -->
  <property>
    <name>fs.default.name</name>
    <value>your-name-node:9527</value>
  </property>

  <!-- host:port of the jobtracker (the MapReduce master) -->
  <property>
    <name>mapred.job.tracker</name>
    <value>your-job-tracker-node:9528</value>
  </property>
</configuration>
2. Configure all the datanode and tasktracker nodes in the "slaves" file, one hostname per line:

node1
node2
node3
....
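
One way to push the same conf/ directory to every node listed in "slaves" (just a sketch; the hostnames and the Hadoop path are placeholders for your own):

for h in node1 node2 node3; do
  # copy the master's conf/ to the same location on each slave
  rsync -av conf/ $h:/path/to/hadoop/conf/
done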


3."masters" is for secondary namenode,so if you just want to run the hadoop
,my suggestion is remove the content and
 leave it as an empty  file.
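
After that, roughly the following should bring the cluster up (a sketch, assuming a 0.17-style bin/ layout and passwordless ssh from the master to every host in "slaves"; run it on the master from the Hadoop install directory):

# one-time only: initialize the namenode's storage
bin/hadoop namenode -format

# start the namenode here and a datanode on every host in conf/slaves
bin/start-dfs.sh

# start the jobtracker here and a tasktracker on every host in conf/slaves
bin/start-mapred.sh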





On 5/27/08, Allan AvendaƱo <aavendan@fiec.espol.edu.ec> wrote:
>
> Greetings, all!
>
> I'm setting up HDFS across multiple nodes, and I'm having some problems with the slave
> machines, specifically with configuring these files:
>
>   masters
>   slaves
>   hadoop-site.xml
>
>
> I have no idea which IP address I should write in the "masters"
> file on the slave machines...
>
> localhost, or "my master machine"?
>
> And likewise for the other files.
>
> Thanks for your help,
>
> --
> --------
>
> Allan Roberto AvendaƱo Sudario
> Guayaquil-Ecuador
>
>


-- 

Best Regards,

Yours
Phonechen

