hadoop-hdfs-user mailing list archives

From: Khaled BEN BAHRI <Khaled.Ben_ba...@it-sudparis.eu>
Subject: Re: default masters/slaves content question
Date: Fri, 25 Jun 2010 10:25:45 GMT
Hi :)

The masters file in the default Hadoop installation contains localhost
because that default installation runs everything on a single node.

Putting localhost in the masters file means that the secondary namenode
will be started on the local node; the namenode and jobtracker themselves
run on whichever machine you invoke the start scripts from, which in the
default setup is also localhost.
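
For illustration, this is what the two files contain in a fresh download
(both live in the conf/ directory); each simply names the local machine:

    $ cat conf/masters
    localhost
    $ cat conf/slaves
    localhost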

I hope this answers your question.
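
As a concrete sketch of the small-cluster case discussed in the quoted
thread below (the hostnames master1, node1, node2 are made up for
illustration): the masters file names the host where start-dfs.sh should
launch the secondary namenode, and the slaves file lists every host that
should run a datanode, including the master machine itself if you want a
datanode there.

conf/masters:

    master1

conf/slaves:

    master1
    node1
    node2

With this layout, running start-dfs.sh on master1 starts the namenode and
the secondary namenode on master1, and a datanode on all three machines.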


Quoting Vitaliy Semochkin <vitaliy.se@gmail.com>:

> Thank you very much for the reply.
> Is there any reason the masters file contains localhost in the default
> installation of Hadoop?
> On Thu, Jun 24, 2010 at 10:56 PM, Jitendra Nath Pandey <
> jitendra@yahoo-inc.com> wrote:
>>  The start-dfs.sh script will try to start the secondary namenode on the
>> node listed in the masters file.
>> Datanodes don't connect to the secondary namenode, and the slaves file
>> should contain the hostnames where you want to start the datanodes. It
>> can be localhost.
>> On 6/24/10 2:56 AM, "Vitaliy Semochkin" <vitaliy.se@gmail.com> wrote:
>> Hi,
>> In the default installation, Hadoop's masters and slaves files contain localhost.
>> Am I correct that the masters file contains a list of SECONDARY namenodes?
>> If so, will the localhost node try to start a secondary namenode even if it
>> already has one?
>> Moreover, will datanodes try to contact themselves in order to reach the
>> secondary namenode on localhost?
>> Should I keep localhost in slaves in case I want to run a datanode on the same
>> server where I start the cluster and run the namenode?
>> This is my first Hadoop experience.
>> Thanks in advance.
>> Vitaliy S
