hadoop-hdfs-issues mailing list archives

From "Suresh Srinivas (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3703) Decrease the datanode failure detection time
Date Mon, 23 Jul 2012 20:21:36 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13420935#comment-13420935 ]

Suresh Srinivas commented on HDFS-3703:
---------------------------------------

bq. whereas inbound client traffic won't cause cascading failures
Can you describe why this would cause cascading failures?

bq. So you might actually end up with less effective bandwidth than you would have had without
the stale state. I guess whatever solution we come up with will have to be tested with some
real world workloads to see what happens.
Heartbeats are processed fairly efficiently in the Namenode. I do not think we need to worry
a lot about it. I have seen very few such stale heartbeats in really large clusters.

bq. I guess whatever solution we come up with will have to be tested with some real world
workloads to see what happens.
That is why there is a flag to turn off the feature, if one is not comfortable with it.
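For reference, a minimal sketch of what such a switch might look like in hdfs-site.xml,
assuming the property names discussed on this issue (dfs.namenode.avoid.read.stale.datanode
and dfs.namenode.stale.datanode.interval; the exact names and defaults may differ in the
committed patch):

    <!-- hdfs-site.xml: hypothetical stale-datanode settings -->
    <property>
      <name>dfs.namenode.avoid.read.stale.datanode</name>
      <value>true</value> <!-- set to false to turn the feature off -->
    </property>
    <property>
      <name>dfs.namenode.stale.datanode.interval</name>
      <value>30000</value> <!-- ms without a heartbeat before a datanode is treated as stale -->
    </property>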
                
> Decrease the datanode failure detection time
> --------------------------------------------
>
>                 Key: HDFS-3703
>                 URL: https://issues.apache.org/jira/browse/HDFS-3703
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node, name-node
>    Affects Versions: 1.0.3, 2.0.0-alpha
>            Reporter: nkeywal
>            Assignee: Suresh Srinivas
>
> By default, if a box dies, the datanode will be marked as dead by the namenode after
10 minutes and 30 seconds (see the breakdown below). In the meantime, this datanode will still
be proposed by the namenode for writing blocks or reading replicas. The same happens if the
datanode process crashes: there are no shutdown hooks to tell the namenode we are not there
anymore.
> This is especially an issue with HBase. The HBase regionserver timeout in production is often
30s (see the sketch below). So with these settings, when a box dies, HBase starts to recover
after 30s while, for another 10 minutes, the namenode still considers the blocks on that box
available. Beyond the write errors, this triggers a lot of failed reads:
> - during the recovery, HBase needs to read the blocks that were used on the dead box (the ones
in the 'HBase Write-Ahead-Log')
> - after the recovery, reading these data blocks (the 'HBase region') will fail 33% of the
time with the default number of replicas, since one of the three replicas sits on the dead box;
this slows down data access, especially when the errors are socket timeouts (i.e. around 60s
most of the time).
> Overall, it would be ideal if the HDFS failure detection settings could be set below HBase's
timeouts.
> As a side note, HBase relies on ZooKeeper to detect regionserver issues.
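For context, the 10:30 figure in the description follows from the namenode's heartbeat expiry
formula, assuming the default values of dfs.namenode.heartbeat.recheck-interval (300,000 ms)
and dfs.heartbeat.interval (3 s):

    expiry = 2 * recheck-interval + 10 * heartbeat-interval
           = 2 * 300 s + 10 * 3 s
           = 630 s
           = 10 minutes 30 seconds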

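Similarly, a minimal sketch of the 30s regionserver timeout mentioned in the description,
assuming it is driven by the standard zookeeper.session.timeout property in hbase-site.xml
(the value here is illustrative, not a recommendation):

    <!-- hbase-site.xml: illustrative 30s session timeout -->
    <property>
      <name>zookeeper.session.timeout</name>
      <value>30000</value> <!-- ms; the regionserver is declared dead after this -->
    </property>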