hadoop-user mailing list archives

From chenfolin <chenfo...@jd.com>
Subject Re: Re: block replication
Date Wed, 01 Jan 2014 10:20:42 GMT
Hi,

Vishnu Viswanath:

10 minutes 30 seconds = 2 * conf.getInt("dfs.namenode.heartbeat.recheck-interval", 5 * 60 * 1000)
+ 10 * 1000 * conf.getInt("dfs.heartbeat.interval", 3)

To change this timeout, configure "dfs.heartbeat.interval" and "dfs.namenode.heartbeat.recheck-interval".
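The formula above can be sketched as a small standalone calculation (the class and method names here are illustrative, not part of Hadoop; the defaults match hdfs-default.xml):

```java
// Sketch of how the NameNode derives the dead-node timeout from the two
// heartbeat settings. Defaults: recheck interval = 5 minutes (in ms),
// heartbeat interval = 3 seconds.
public class DeadNodeTimeout {
    public static long heartbeatExpireMillis(long recheckIntervalMs,
                                             long heartbeatIntervalSec) {
        // 2 * recheck interval + 10 * heartbeat interval (converted to ms)
        return 2 * recheckIntervalMs + 10 * 1000 * heartbeatIntervalSec;
    }

    public static void main(String[] args) {
        long timeout = heartbeatExpireMillis(5 * 60 * 1000, 3);
        // 2 * 300000 + 10 * 3000 = 630000 ms = 10 minutes 30 seconds
        System.out.println(timeout + " ms");
    }
}
```

With the defaults this works out to 630,000 ms, which is where the 10 minutes 30 seconds figure comes from.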


2014-01-01 



chenfolin 



From: Vishnu Viswanath 
Sent: 2014-01-01 17:22:11 
To: user@hadoop.apache.org 
Cc: 
Subject: Re: block replication 
 
thanks Hardik,


I did a bit of reading on the 'stale' state; HDFS-3703 describes it as a state
between dead and alive. 

It also says the value for marking a node as dead is 10 minutes 30 seconds. 

But can this be configured?


Please help.




On Wed, Jan 1, 2014 at 2:46 AM, Hardik Pandya <smarty.juice@gmail.com> wrote:

<property>
  <name>dfs.heartbeat.interval</name>
  <value>3</value>
  <description>Determines datanode heartbeat interval in seconds.</description>
</property>


and maybe you are looking for 




<property>
  <name>dfs.namenode.stale.datanode.interval</name>
  <value>30000</value>
  <description>
    Default time interval for marking a datanode as "stale", i.e., if 
    the namenode has not received heartbeat msg from a datanode for 
    more than this time interval, the datanode will be marked and treated 
    as "stale" by default. The stale interval cannot be too small since 
    otherwise this may cause too frequent change of stale states. 
    We thus set a minimum stale interval value (the default value is 3 times 
    of heartbeat interval) and guarantee that the stale interval cannot be less
    than the minimum value.
  </description>
</property>

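Putting the two settings together, here is a hedged hdfs-site.xml sketch that shortens the dead-node timeout (the values below are illustrative, not recommendations):

```xml
<!-- Illustrative hdfs-site.xml fragment: with a 1-minute recheck interval and
     the default 3-second heartbeat, the dead-node timeout becomes
     2 * 60000 + 10 * 1000 * 3 = 150000 ms, i.e. 2.5 minutes. -->
<property>
  <name>dfs.namenode.heartbeat.recheck-interval</name>
  <value>60000</value> <!-- milliseconds; default is 300000 (5 minutes) -->
</property>
<property>
  <name>dfs.heartbeat.interval</name>
  <value>3</value> <!-- seconds; the default -->
</property>
```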


On Fri, Dec 27, 2013 at 10:10 PM, Vishnu Viswanath <vishnu.viswanath25@gmail.com> wrote:

Well, I couldn't find any property in http://hadoop.apache.org/docs/r1.2.1/hdfs-default.html
that sets the time interval to consider a node as dead. 


I saw there is a property dfs.namenode.heartbeat.recheck-interval or heartbeat.recheck.interval,
but I couldn't find it there. Has it been removed?
Or am I looking in the wrong place?



On Sat, Dec 28, 2013 at 7:36 AM, Chris Embree <cembree@gmail.com> wrote:

Maybe I'm just grouchy tonight... it seems all of these questions can be answered by RTFM.
 http://hadoop.apache.org/docs/current2/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html


What's the balance between encouraging learning by new-to-Hadoop users and OMG!?  



On Fri, Dec 27, 2013 at 8:58 PM, Vishnu Viswanath <vishnu.viswanath25@gmail.com> wrote:

Hi all,


Can someone tell me these:


1) Which property in the Hadoop conf sets the time limit to consider a node as dead?
2) After detecting a node as dead, after how much time does Hadoop replicate its blocks to
another node?
3) If the dead node comes alive again, in how much time does Hadoop identify a block as
over-replicated, and when does it delete that block?


Regards,