hadoop-hdfs-issues mailing list archives

From "Aaron T. Myers (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-2430) The number of failed or low-resource volumes the NN can tolerate should be configurable
Date Fri, 14 Oct 2011 07:12:12 GMT

    [ https://issues.apache.org/jira/browse/HDFS-2430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13127324#comment-13127324 ]

Aaron T. Myers commented on HDFS-2430:
--------------------------------------

While just being able to configure the number of tolerated dir failures from the single pool
of dirs specified by the {{dfs.namenode.name.dir}} config would be an improvement, I actually
think the best solution would be to allow specifying multiple distinct pools of name dirs,
where the number of failed volumes tolerated can be configured per-pool. This will be useful
since not all name dirs are created equal. For example, if an operator has 4 name dirs configured,
3 of which are local and the fourth on a remote machine mounted via NFS, the operator
might want to configure the NN to tolerate up to 2 failures of the 3 local dirs, but to stop
immediately if the NFS dir goes away. I think such a scheme will be necessary for an NFS-based
HA solution, as described as one of the options in HDFS-1623, but this configuration can be
useful generally as well, so it might as well be developed on trunk.
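For reference, the scenario above under today's single-pool behavior would be configured with the existing {{dfs.namenode.name.dir}} property; something like the following sketch (the paths are hypothetical examples):

```xml
<!-- hdfs-site.xml: four name dirs in a single pool (current behavior).
     Three local dirs plus one NFS mount; all paths are hypothetical. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/1/dfs/nn,/data/2/dfs/nn,/data/3/dfs/nn,/mnt/nfs/dfs/nn</value>
</property>
```

With a single pool there is no way to say "the NFS dir is mandatory but the local dirs are redundant," which is the gap the proposals below try to close.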

I can imagine two high-level designs:

# The operator can configure exactly two distinct pools of name dirs, configured via something
like {{dfs.namenode.name.dirs.required}} and {{dfs.namenode.name.dirs.redundant}}. If any
single dir specified in the {{.required}} config goes offline, the NN will not continue to
operate. The number of acceptable failed dirs in the {{.redundant}} pool would be configurable
by a third option, {{dfs.namenode.name.dirs.failures.tolerated}}.
# The operator can specify N distinct pools of name dirs, configured via something like {{dfs.namenode.name.dirs.pool.0}},
{{dfs.namenode.name.dirs.pool.1}}, etc. For each of these configured pools, the number of
failed volumes tolerated could be configured individually, e.g. {{dfs.namenode.name.dirs.failures.tolerated.pool.0}},
{{dfs.namenode.name.dirs.failures.tolerated.pool.1}}, etc.
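To make the two options concrete, here is a sketch of how each might look in {{hdfs-site.xml}} for the 3-local-plus-NFS scenario above. The property names are the proposals from this comment, not existing config keys, and the paths are hypothetical:

```xml
<!-- Option 1: two fixed pools. NFS dir is required; up to 2 of the
     3 local (redundant) dirs may fail. Property names are proposals. -->
<property>
  <name>dfs.namenode.name.dirs.required</name>
  <value>/mnt/nfs/dfs/nn</value>
</property>
<property>
  <name>dfs.namenode.name.dirs.redundant</name>
  <value>/data/1/dfs/nn,/data/2/dfs/nn,/data/3/dfs/nn</value>
</property>
<property>
  <name>dfs.namenode.name.dirs.failures.tolerated</name>
  <value>2</value>
</property>

<!-- Option 2: N pools with per-pool tolerance. Pool 0 holds the local
     dirs (2 failures tolerated); pool 1 holds the NFS dir (0 tolerated,
     i.e. any failure stops the NN). Property names are proposals. -->
<property>
  <name>dfs.namenode.name.dirs.pool.0</name>
  <value>/data/1/dfs/nn,/data/2/dfs/nn,/data/3/dfs/nn</value>
</property>
<property>
  <name>dfs.namenode.name.dirs.failures.tolerated.pool.0</name>
  <value>2</value>
</property>
<property>
  <name>dfs.namenode.name.dirs.pool.1</name>
  <value>/mnt/nfs/dfs/nn</value>
</property>
<property>
  <name>dfs.namenode.name.dirs.failures.tolerated.pool.1</name>
  <value>0</value>
</property>
```

Note that option 1 is just the special case of option 2 with two pools, one of which has a tolerance of 0.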

Under either of these schemes, it would be an error to specify the same dir in multiple pools.

Any thoughts? Option 2 is obviously more flexible, but I don't want to thrust our operators
deeper into configuration hell than they already are.
                
> The number of failed or low-resource volumes the NN can tolerate should be configurable
> ---------------------------------------------------------------------------------------
>
>                 Key: HDFS-2430
>                 URL: https://issues.apache.org/jira/browse/HDFS-2430
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: name-node
>    Affects Versions: 0.24.0
>            Reporter: Aaron T. Myers
>            Assignee: Aaron T. Myers
>
> Currently the number of failed or low-resource volumes the NN can tolerate is effectively
hard-coded at 1. It would be nice if this were configurable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
