hadoop-hdfs-issues mailing list archives

From "Liang Xie (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5917) Have an ability to refresh deadNodes list periodically
Date Mon, 10 Feb 2014 03:24:19 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896184#comment-13896184 ]

Liang Xie commented on HDFS-5917:
---------------------------------

[~saint.ack@gmail.com], could you, or someone else, take a look at it? Thanks!

> Have an ability to refresh deadNodes list periodically
> ------------------------------------------------------
>
>                 Key: HDFS-5917
>                 URL: https://issues.apache.org/jira/browse/HDFS-5917
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.0.0, 2.2.0
>            Reporter: Liang Xie
>            Assignee: Liang Xie
>         Attachments: HDFS-5917.txt
>
>
> In the current HBase + HDFS trunk implementation, once a node is inserted into the deadNodes
> list, it can never be chosen again until deadNodes.clear() is invoked. While fixing HDFS-5637,
> I had a rough thought: since quite a few conditions can cause a node to be inserted into
> deadNodes, we should have the ability to refresh this important cache automatically. That
> would benefit the HBase scenario at least. For example, before HDFS-5637 was fixed, if a local
> node was inserted into deadNodes, reads went remote even though the local node was not actually
> dead :) Even more unfortunately, if the block belongs to a huge HFile that is not picked up by
> any minor compaction for a long time, the performance penalty persists until a major compaction,
> a region reopen, or deadNodes.clear() is invoked...
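
The request above amounts to time-based expiry of the client-side deadNodes cache. The following is a minimal, self-contained sketch of how entries could expire after a configurable interval, so a node marked dead is retried automatically instead of only after deadNodes.clear(). This is not the attached HDFS-5917.txt patch or the actual DFSInputStream code; the class, method, and configuration names are illustrative assumptions.

import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: a deadNodes cache whose entries expire after
// expiryMs milliseconds (e.g. driven by a hypothetical config key such as
// dfs.client.deadnodes.refresh.interval).
public class ExpiringDeadNodes {
  private final long expiryMs;
  // datanode id -> time (ms) at which it was marked dead
  private final Map<String, Long> deadNodes = new ConcurrentHashMap<>();

  public ExpiringDeadNodes(long expiryMs) {
    this.expiryMs = expiryMs;
  }

  // Record a datanode as dead at the current time.
  public void addToDeadNodes(String datanodeId) {
    deadNodes.put(datanodeId, System.currentTimeMillis());
  }

  // True if the node was marked dead recently enough to still be skipped.
  public boolean isDead(String datanodeId) {
    Long markedAt = deadNodes.get(datanodeId);
    if (markedAt == null) {
      return false;
    }
    if (System.currentTimeMillis() - markedAt > expiryMs) {
      deadNodes.remove(datanodeId);  // entry expired: give the node another chance
      return false;
    }
    return true;
  }

  // Drop all expired entries; could be called before choosing a datanode.
  public void refresh() {
    long now = System.currentTimeMillis();
    for (Iterator<Map.Entry<String, Long>> it = deadNodes.entrySet().iterator(); it.hasNext();) {
      if (now - it.next().getValue() > expiryMs) {
        it.remove();
      }
    }
  }
}

With such a scheme, a local node that was marked dead transiently would become eligible again after one refresh interval, instead of only after a major compaction, a region reopen, or an explicit deadNodes.clear().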



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
