hadoop-hdfs-issues mailing list archives

From "Ming Ma (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9005) Provide support for upgrade domain script
Date Tue, 10 Nov 2015 17:07:10 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14998911#comment-14998911 ]

Ming Ma commented on HDFS-9005:

[~jmeagher] provided some input from the admins' point of view. The existing refreshNodes mechanism has some drawbacks:

* It requires admins to update the hosts/excludes files on each NN's local machine and then
send a refreshNodes RPC to the NN. In some cases, it is much easier to manage if any remote
machine (with admin privilege) can directly ask the NN to decommission some nodes via RPC.
* hosts/excludes file inconsistency across NNs. Having each NN keep its own copy of the files
introduces possible inconsistency between the copies.
* refreshNodes efficiency. For a large cluster, reloading the whole files covering all DNs
just to update the properties of a few DNs isn't efficient.
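The first drawback above can be sketched in shell. The excludes path below is illustrative (it is whatever `dfs.hosts.exclude` points to on each NN host); `-refreshNodes` is the existing dfsadmin subcommand:

```shell
# Sketch of the existing workflow: append the nodes to decommission to the
# excludes file on the NN's local disk, then ask the NN to reload it.
# The path below is illustrative; it must match dfs.hosts.exclude.
EXCLUDES=/tmp/dfs.exclude
printf '%s\n' dn7.example.com dn8.example.com >> "$EXCLUDES"
# This must be repeated on every NN host, then (with admin privilege):
#   hdfs dfsadmin -refreshNodes
```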

So we wonder if we should take this opportunity to look into another approach (which might
have been discussed before):

* Add a new RPC to ClientProtocol to allow dfsadmin to set properties for specific DNs; that
could include decommission, upgrade domain, etc.
* The request will be persisted to an extensible state store. It could be leveldb, the
fsimage/edit log, etc. Only the active NN can update the state store. NNs will read from the
state store at startup, and the standby NN will be notified of state store changes.
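A rough in-memory sketch of that flow, assuming hypothetical class and method names (none of these are real HDFS APIs): the active NN accepts per-DN property updates and records them in a pluggable state store, while a standby rejects writes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch only; class and field names are invented for
// illustration and are not actual HDFS classes.
class DatanodeProperties {
    String upgradeDomain;
    boolean decommissioned;
}

class NodePropertyStateStore {
    // Keyed by DN hostname/IP. A real store could be leveldb or the
    // fsimage/edit log, as the comment suggests.
    private final Map<String, DatanodeProperties> store = new ConcurrentHashMap<>();
    private final boolean isActive;  // only the active NN may write

    NodePropertyStateStore(boolean isActive) { this.isActive = isActive; }

    // What a new ClientProtocol method might delegate to on the NN side.
    void setUpgradeDomain(String node, String domain) {
        checkActive();
        store.computeIfAbsent(node, k -> new DatanodeProperties()).upgradeDomain = domain;
    }

    void startDecommission(String node) {
        checkActive();
        store.computeIfAbsent(node, k -> new DatanodeProperties()).decommissioned = true;
    }

    DatanodeProperties get(String node) { return store.get(node); }

    private void checkActive() {
        if (!isActive) throw new IllegalStateException("standby NN is read-only");
    }
}
```

Standbys would rebuild the same map by reading the store at startup and applying change notifications, rather than accepting writes directly.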

> Provide support for upgrade domain script
> -----------------------------------------
>                 Key: HDFS-9005
>                 URL: https://issues.apache.org/jira/browse/HDFS-9005
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Ming Ma
> As part of the upgrade domain feature, we need to provide a mechanism to specify the
> upgrade domain for each datanode. One way to accomplish that is to allow admins to specify
> an upgrade domain script that takes a DN IP or hostname as input and returns the upgrade
> domain. The namenode will then use it at run time to set {{DatanodeInfo}}'s upgrade domain
> string. The configuration can be something like:
> {noformat}
> <property>
> <name>dfs.namenode.upgrade.domain.script.file.name</name>
> <value>/etc/hadoop/conf/upgrade-domain.sh</value>
> </property>
> {noformat}
> just like topology script, 
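For illustration, a minimal sketch of what such an upgrade-domain script could look like, in the spirit of a topology script. The `ud01`/`ud02`/`ud03` domain names and the host patterns are invented for this example:

```shell
#!/bin/sh
# Hypothetical upgrade-domain.sh: given DN IPs/hostnames as arguments,
# print one upgrade domain per input. Domain names and host patterns
# below are purely illustrative.
upgrade_domain() {
  case "$1" in
    10.0.1.*|dn1*) echo "ud01" ;;
    10.0.2.*|dn2*) echo "ud02" ;;
    *)             echo "ud03" ;;  # default domain
  esac
}
for host in "$@"; do
  upgrade_domain "$host"
done
```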

This message was sent by Atlassian JIRA
