hadoop-common-user mailing list archives

From Anil Gupta <anilgupt...@gmail.com>
Subject Re: adding or restarting a data node in a hadoop cluster
Date Tue, 01 May 2012 05:10:04 GMT
@Amith: if the DN gets its IP address from DHCP, the address may change after a reboot.

Dynamic IPs in the cluster are not a good choice, IMO.
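One common way to sidestep this (a sketch; it assumes you can manage each node's network config, and all addresses/hostnames below are placeholders) is to give cluster nodes static addresses, or at least DHCP reservations, and refer to them everywhere by stable hostnames, e.g. via /etc/hosts entries replicated on every node:

```
# /etc/hosts -- example static mappings (placeholder addresses)
192.168.1.10  nn-host    # namenode
192.168.1.11  jt-host    # jobtracker
192.168.1.20  dn1-host   # datanode/tasktracker
```

With stable hostnames in the site files, a node's identity survives reboots even if the underlying interface is reconfigured.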

Best Regards,
Anil

On Apr 30, 2012, at 8:22 PM, Amith D K <amithdk@huawei.com> wrote:

> Hi Sumadhur,
> 
> As you mentioned, configuring the NN and JT addresses should be enough.
> 
> I am not able to understand how the DN's IP gets changed on restart, though.
> 
> ________________________________________
> From: sumadhur [sumadhur_iitr@yahoo.com]
> Sent: Tuesday, May 01, 2012 10:58 AM
> To: common-user@hadoop.apache.org
> Subject: adding or restarting a data node in a hadoop cluster
> 
> I am on hadoop 0.20.
> 
> To add a data node to a cluster, if we do not use the include/exclude/slaves files, do we need to do anything other than configuring the hdfs-site.xml to point to the name node and the mapred-site.xml to point to the job tracker?
> 
> For example, should the job tracker and name node always be restarted?
> 
> On a related note, if we restart a data node (one that has some blocks on it) and the data node now has a new IP address, should we restart the namenode/job tracker for HDFS and map-reduce to function correctly?
> Would the blocks on the restarted data node be detected, or would HDFS think that these blocks were lost and start re-replicating them?
> 
> Thanks,
> Sumadhur
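To make the question concrete: per Amith's reply, a new datanode/tasktracker in 0.20 typically only needs the NN and JT addresses in its site files (a sketch; `nn-host`, `jt-host`, and the ports are placeholders, and in 0.20 `fs.default.name` is commonly placed in core-site.xml rather than hdfs-site.xml, depending on your layout):

```
<!-- core-site.xml: where the HDFS namenode lives -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://nn-host:9000</value>
</property>

<!-- mapred-site.xml: where the jobtracker lives -->
<property>
  <name>mapred.job.tracker</name>
  <value>jt-host:9001</value>
</property>
```

With those in place, the daemons can be started on the new node with `bin/hadoop-daemon.sh start datanode` and `bin/hadoop-daemon.sh start tasktracker`; registration happens from the DN/TT side, so the NN and JT should not need a restart.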
