hadoop-common-issues mailing list archives

From "Aaron T. Myers (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-7652) Provide a mechanism for a client Hadoop configuration to 'poison' daemon startup; i.e., disallow daemon start up on a client config.
Date Mon, 19 Sep 2011 02:04:09 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-7652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13107586#comment-13107586 ]

Aaron T. Myers commented on HADOOP-7652:
----------------------------------------

I might even be in favor of attempting to entirely separate client config files from server
config files. One could imagine that an NN would not start if {{hdfs-server.xml}} were not
present, and that client machines would only receive {{hdfs-client.xml}}, for example. This
would also potentially solve the problem identified by HADOOP-7621, wherein it's not presently
possible for Hadoop configs to contain a "secret" value which clients don't have access to.

As an analog, Kerberos has both {{/etc/krb5.conf}} and {{/etc/krb5kdc/kdc.conf}}. The former
must be present on both server and client machines, while the latter need only be present
on the servers and is usually not world-readable.
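
To make the idea a bit more concrete, here is a rough sketch (the file name {{hdfs-server.xml}} and the startup check itself are hypothetical, not existing Hadoop behavior) of how a daemon could refuse to come up when the server-side file is missing from its classpath:

{code:java}
// Hypothetical sketch: hdfs-server.xml is a proposed file name, and this
// check is not part of the current NameNode startup path.
public class ServerConfigCheck {
  public static void requireServerConfig() {
    // Hadoop configuration resources are normally found on the classpath,
    // so a missing server-side file can be detected before the daemon
    // initializes any state.
    java.net.URL serverConf = Thread.currentThread()
        .getContextClassLoader()
        .getResource("hdfs-server.xml");
    if (serverConf == null) {
      System.err.println("hdfs-server.xml not found on the classpath; this "
          + "machine appears to carry a client-only configuration. "
          + "Refusing to start the daemon.");
      System.exit(1);
    }
  }
}
{code}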

> Provide a mechanism for a client Hadoop configuration to 'poison' daemon startup; i.e., disallow daemon start up on a client config.
> ------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-7652
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7652
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: conf
>            Reporter: Philip Zeyliger
>
> We've seen folks who have been given a Hadoop configuration to act as a client accidentally
> type "hadoop namenode" and get things into a confused or incorrect state.  Most recently,
> we've seen data corruption when users accidentally run extra secondary namenodes (https://issues.apache.org/jira/browse/HDFS-2305).
> I'd like to propose that we introduce a configuration property, say, "client.poison.servers",
> which, if set, disables the Hadoop daemons (nn, snn, jt, tt, etc.) with a reasonable error
> message.  Hadoop administrators could then hand out or install client-only configs on machines
> intended to be just clients, with a little less worry that daemons will accidentally get run.
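
A minimal sketch of how such a poison property could be enforced at daemon startup (the guard class below is an assumption for illustration; only the property name comes from the proposal above):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch only: "client.poison.servers" is the property name proposed above;
// this guard class does not exist in Hadoop today.
public class DaemonStartupGuard {
  public static void failIfPoisoned(Configuration conf, String daemonName) {
    // A client-only configuration would ship with client.poison.servers=true.
    if (conf.getBoolean("client.poison.servers", false)) {
      System.err.println("Refusing to start " + daemonName + ": this "
          + "configuration is marked client-only (client.poison.servers=true).");
      System.exit(1);
    }
  }
}
{code}

Each daemon's main() (nn, snn, jt, tt, etc.) would call {{failIfPoisoned(new Configuration(), "namenode")}} or similar before doing any real work.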

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
