ambari-dev mailing list archives

From "Robert Levas (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (AMBARI-8477) HDFS service components should indicate security state
Date Tue, 23 Dec 2014 10:43:13 GMT

     [ https://issues.apache.org/jira/browse/AMBARI-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Robert Levas updated AMBARI-8477:
---------------------------------
    Attachment: AMBARI-8477_04.patch

Trying again with patch [^AMBARI-8477_04.patch]; the earlier failures were unrelated.

> HDFS service components should indicate security state
> ------------------------------------------------------
>
>                 Key: AMBARI-8477
>                 URL: https://issues.apache.org/jira/browse/AMBARI-8477
>             Project: Ambari
>          Issue Type: Improvement
>          Components: ambari-server, stacks
>    Affects Versions: 2.0.0
>            Reporter: Robert Levas
>            Assignee: Robert Levas
>              Labels: agent, kerberos, lifecycle, security
>             Fix For: 2.0.0
>
>         Attachments: AMBARI-8477_01.patch, AMBARI-8477_01.patch, AMBARI-8477_01.patch, AMBARI-8477_02.patch, AMBARI-8477_03.patch, AMBARI-8477_04.patch, AMBARI-8477_04.patch
>
>
> The HDFS service components should indicate their security state when queried by the Ambari Agent via STATUS_COMMAND. Each component should determine its state as follows:
> h2. NAMENODE
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled'] 
> *** = “true”
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = “kerberos”
> *** required
> ** hadoop.security.authorization
> *** = “true”
> *** required
> ** hadoop.rpc.protection
> *** = “authentication”
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.namenode.keytab.file
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.namenode.kerberos.principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>     if kinit(namenode principal) && kinit(https principal) succeeds
>         state = SECURED_KERBEROS
>     else
>         state = ERROR 
> else
>     state = UNSECURED
> {code}
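> For illustration, a minimal Python sketch of the NAMENODE check described above. The helper names ({{read_xml_properties}}, {{kinit_succeeds}}) and the way the command JSON and conf dir are passed in are assumptions for this example, not the actual agent API; the HTTPS/SPNEGO {{kinit}} from the pseudocode is omitted for brevity.
> {code:python}
> import os
> import subprocess
> import xml.etree.ElementTree as ET
>
>
> def read_xml_properties(path):
>     """Parse a Hadoop-style *-site.xml file into a {name: value} dict."""
>     props = {}
>     for prop in ET.parse(path).getroot().findall('property'):
>         name = prop.findtext('name')
>         if name:
>             props[name] = prop.findtext('value') or ''
>     return props
>
>
> def kinit_succeeds(principal, keytab):
>     """Return True if 'kinit -kt <keytab> <principal>' exits successfully."""
>     with open(os.devnull, 'w') as devnull:
>         return subprocess.call(['kinit', '-kt', keytab, principal],
>                                stdout=devnull, stderr=devnull) == 0
>
>
> def namenode_security_state(config, hadoop_conf_dir):
>     # Command JSON indicator
>     if config['configurations']['cluster-env'].get('security_enabled') != 'true':
>         return 'UNSECURED'
>
>     core_site = read_xml_properties(os.path.join(hadoop_conf_dir, 'core-site.xml'))
>     hdfs_site = read_xml_properties(os.path.join(hadoop_conf_dir, 'hdfs-site.xml'))
>     keytab = hdfs_site.get('dfs.namenode.keytab.file', '')
>     principal = hdfs_site.get('dfs.namenode.kerberos.principal', '')
>
>     # Required core-site and hdfs-site indicators
>     indicators_ok = (
>         core_site.get('hadoop.security.authentication') == 'kerberos' and
>         core_site.get('hadoop.security.authorization') == 'true' and
>         core_site.get('hadoop.rpc.protection') == 'authentication' and
>         core_site.get('hadoop.security.auth_to_local', '') != '' and
>         keytab != '' and os.access(keytab, os.R_OK) and
>         principal != ''
>     )
>     if not indicators_ok:
>         return 'UNSECURED'
>
>     # Indicators imply security is on; confirm the credentials actually work
>     return 'SECURED_KERBEROS' if kinit_succeeds(principal, keytab) else 'ERROR'
> {code}
> The DATANODE, SECONDARY_NAMENODE, and HDFS_CLIENT checks below follow the same shape, differing only in which hdfs-site keytab/principal properties they read and which principals they {{kinit}}.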
> h2. DATANODE
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled'] 
> *** = “true”
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = “kerberos”
> *** required
> ** hadoop.security.authorization
> *** = “true”
> *** required
> ** hadoop.rpc.protection
> *** = “authentication”
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.datanode.keytab.file
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.datanode.kerberos.principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>     if kinit(datanode principal) && kinit(https principal) succeeds
>         state = SECURED_KERBEROS
>     else
>         state = ERROR 
> else
>     state = UNSECURED
> {code}
> h2. SECONDARY_NAMENODE
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled'] 
> *** = “true”
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = “kerberos”
> *** required
> ** hadoop.security.authorization
> *** = “true”
> *** required
> ** hadoop.rpc.protection
> *** = “authentication”
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.secondary.namenode.keytab.file
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.secondary.namenode.kerberos.principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>     if kinit(namenode principal) && kinit(https principal) succeeds
>         state = SECURED_KERBEROS
>     else
>         state = ERROR 
> else
>     state = UNSECURED
> {code}
> h2. HDFS_CLIENT
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled'] 
> *** = “true”
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = “kerberos”
> *** required
> ** hadoop.security.authorization
> *** = “true”
> *** required
> ** hadoop.rpc.protection
> *** = “authentication”
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.web.authentication.kerberos.keytab
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.web.authentication.kerberos.principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>     if kinit(hdfs web principal) succeeds
>         state = SECURED_KERBEROS
>     else
>         state = ERROR 
> else
>     state = UNSECURED
> {code}
> h2. JOURNALNODE
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled'] 
> *** = “true”
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = “kerberos”
> *** required
> ** hadoop.security.authorization
> *** = “true”
> *** required
> ** hadoop.rpc.protection
> *** = “authentication”
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.journalnode.keytab.file
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.journalnode.kerberos.principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>     state = SECURED_KERBEROS
> else
>     state = UNSECURED
> {code}
> h2. ZKFC
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled'] 
> *** = “true”
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = “kerberos”
> *** required
> ** hadoop.security.authorization
> *** = “true”
> *** required
> ** hadoop.rpc.protection
> *** = “authentication”
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>     state = SECURED_KERBEROS
> else
>     state = UNSECURED
> {code}
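> As a sketch, continuing from the NAMENODE example above and reusing its hypothetical {{read_xml_properties}} helper, the ZKFC case reduces to checking the Command JSON flag and the core-site indicators, with no keytab or {{kinit}} involved:
> {code:python}
> def zkfc_security_state(config, hadoop_conf_dir):
>     # Command JSON indicator
>     if config['configurations']['cluster-env'].get('security_enabled') != 'true':
>         return 'UNSECURED'
>
>     # Required core-site indicators only; ZKFC has no keytab/principal to verify
>     core_site = read_xml_properties(os.path.join(hadoop_conf_dir, 'core-site.xml'))
>     indicators_ok = (
>         core_site.get('hadoop.security.authentication') == 'kerberos' and
>         core_site.get('hadoop.security.authorization') == 'true' and
>         core_site.get('hadoop.rpc.protection') == 'authentication' and
>         core_site.get('hadoop.security.auth_to_local', '') != ''
>     )
>     return 'SECURED_KERBEROS' if indicators_ok else 'UNSECURED'
> {code}
> JOURNALNODE (above) is the same except that it also validates the journalnode keytab/principal properties from hdfs-site.xml before reporting SECURED_KERBEROS.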
> _*Note*_: Due to the _cost_ of calling {{kinit}}, results should be cached for a period of time before retrying. This may be an issue depending on the frequency of the heartbeat.
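> A minimal sketch of such caching, wrapping the hypothetical {{kinit_succeeds}} helper from the NAMENODE example above; the TTL value is an assumption and would need to be tuned against the agent's heartbeat interval:
> {code:python}
> import time
>
> _KINIT_CACHE = {}       # (principal, keytab) -> (timestamp, last result)
> KINIT_CACHE_TTL = 300   # seconds; assumed value, tune against the heartbeat interval
>
>
> def cached_kinit_succeeds(principal, keytab):
>     """Only fork a real kinit when the cached result is older than the TTL."""
>     key = (principal, keytab)
>     now = time.time()
>     cached = _KINIT_CACHE.get(key)
>     if cached is not None and now - cached[0] < KINIT_CACHE_TTL:
>         return cached[1]
>     result = kinit_succeeds(principal, keytab)
>     _KINIT_CACHE[key] = (now, result)
>     return result
> {code}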



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
