Date: Tue, 16 Dec 2014 14:41:13 +0000 (UTC)
From: "Robert Levas (JIRA)"
To: dev@ambari.apache.org
Reply-To: dev@ambari.apache.org
Subject: [jira] [Updated] (AMBARI-8477) HDFS service components should indicate security state

     [ https://issues.apache.org/jira/browse/AMBARI-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Levas updated AMBARI-8477:
---------------------------------
    Attachment: AMBARI-8477_01.patch

Cosmetic changes

Patch File [^AMBARI-8477_01.patch]

> HDFS service components should indicate security state
> ------------------------------------------------------
>
>                 Key: AMBARI-8477
>                 URL: https://issues.apache.org/jira/browse/AMBARI-8477
>             Project: Ambari
>          Issue Type: Improvement
>          Components: ambari-server, stacks
>    Affects Versions: 2.0.0
>            Reporter: Robert Levas
>            Assignee: Robert Levas
>              Labels: agent, kerberos, lifecycle, security
>             Fix For: 2.0.0
>
>         Attachments: AMBARI-8477_01.patch, AMBARI-8477_01.patch, AMBARI-8477_01.patch
>
>
> The HDFS service components should indicate their security state when queried by the Ambari Agent via the STATUS_COMMAND. Each component should determine its state as follows:
> h2. NAMENODE
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled']
> *** = "true"
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = "kerberos"
> *** required
> ** hadoop.security.authorization
> *** = "true"
> *** required
> ** hadoop.rpc.protection
> *** = "authentication"
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.namenode.keytab.file
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.namenode.kerberos.principal
> *** not empty
> *** required
> ** dfs.namenode.kerberos.https.principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>   if kinit(namenode principal) && kinit(https principal) succeeds
>     state = SECURED_KERBEROS
>   else
>     state = ERROR
> else
>   state = UNSECURED
> {code}
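> For illustration only, a minimal Python sketch of how the NAMENODE check above might look; the helper names ({{read_site_properties}}, {{kinit_succeeds}}, {{namenode_security_state}}) and the overall structure are assumptions for this example, not the interface of the attached patch.
> {code}
> # Illustrative sketch: reads the indicators listed above and applies the
> # NAMENODE pseudocode. Function names are assumptions, not the patch's API.
> import os
> import subprocess
> import xml.etree.ElementTree as ET
>
> def read_site_properties(path):
>     # Parse a Hadoop *-site.xml file into a dict of property name -> value.
>     props = {}
>     for prop in ET.parse(path).getroot().findall('property'):
>         props[prop.findtext('name')] = prop.findtext('value') or ''
>     return props
>
> def kinit_succeeds(keytab, principal):
>     # kinit exiting 0 proves the keytab/principal pair is usable.
>     # (A real check would first substitute any _HOST token in the
>     # principal with the local hostname.)
>     return subprocess.call(['kinit', '-kt', keytab, principal]) == 0
>
> def namenode_security_state(config, hadoop_conf_dir):
>     # Command JSON indicator: security must be enabled for the cluster.
>     if config['configurations']['cluster-env']['security_enabled'] != 'true':
>         return 'UNSECURED'
>     core_site = read_site_properties(os.path.join(hadoop_conf_dir, 'core-site.xml'))
>     hdfs_site = read_site_properties(os.path.join(hadoop_conf_dir, 'hdfs-site.xml'))
>     # core-site.xml indicators: all required values must match.
>     if (core_site.get('hadoop.security.authentication') != 'kerberos'
>             or core_site.get('hadoop.security.authorization') != 'true'
>             or core_site.get('hadoop.rpc.protection') != 'authentication'
>             or not core_site.get('hadoop.security.auth_to_local')):
>         return 'UNSECURED'
>     # hdfs-site.xml indicators: readable keytab plus non-empty principals.
>     keytab = hdfs_site.get('dfs.namenode.keytab.file', '')
>     principal = hdfs_site.get('dfs.namenode.kerberos.principal', '')
>     https_principal = hdfs_site.get('dfs.namenode.kerberos.https.principal', '')
>     if not (keytab and os.access(keytab, os.R_OK) and principal and https_principal):
>         return 'UNSECURED'
>     # Indicators validate; prove the credentials actually work.
>     if kinit_succeeds(keytab, principal) and kinit_succeeds(keytab, https_principal):
>         return 'SECURED_KERBEROS'
>     return 'ERROR'
> {code}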
> h2. DATANODE
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled']
> *** = "true"
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = "kerberos"
> *** required
> ** hadoop.security.authorization
> *** = "true"
> *** required
> ** hadoop.rpc.protection
> *** = "authentication"
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.datanode.keytab.file
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.datanode.kerberos.principal
> *** not empty
> *** required
> ** dfs.datanode.kerberos.https.principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>   if kinit(datanode principal) && kinit(https principal) succeeds
>     state = SECURED_KERBEROS
>   else
>     state = ERROR
> else
>   state = UNSECURED
> {code}
> h2. SECONDARY_NAMENODE
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled']
> *** = "true"
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = "kerberos"
> *** required
> ** hadoop.security.authorization
> *** = "true"
> *** required
> ** hadoop.rpc.protection
> *** = "authentication"
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.namenode.secondary.keytab.file
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.namenode.secondary.kerberos.principal
> *** not empty
> *** required
> ** dfs.namenode.secondary.kerberos.https.principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>   if kinit(secondary namenode principal) && kinit(https principal) succeeds
>     state = SECURED_KERBEROS
>   else
>     state = ERROR
> else
>   state = UNSECURED
> {code}
> h2. HDFS_CLIENT
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled']
> *** = "true"
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = "kerberos"
> *** required
> ** hadoop.security.authorization
> *** = "true"
> *** required
> ** hadoop.rpc.protection
> *** = "authentication"
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.web.authentication.kerberos.keytab
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.web.authentication.kerberos.principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>   if kinit(hdfs web principal) succeeds
>     state = SECURED_KERBEROS
>   else
>     state = ERROR
> else
>   state = UNSECURED
> {code}
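> The four components above differ only in which hdfs-site.xml properties they check and which principals they kinit, so the per-component logic could be table-driven. The sketch below is one possible shape, reusing the hypothetical {{read_site_properties}} and {{kinit_succeeds}} helpers from the NAMENODE example; the dictionary layout and function names are likewise assumptions, not the patch's design.
> {code}
> # Illustrative sketch; reuses read_site_properties() and kinit_succeeds()
> # from the NAMENODE example above.
> import os
>
> # Shared core-site.xml indicators and their required values.
> CORE_SITE_EXPECTED = {
>     'hadoop.security.authentication': 'kerberos',
>     'hadoop.security.authorization': 'true',
>     'hadoop.rpc.protection': 'authentication',
> }
>
> # Component -> (keytab property, principal properties) in hdfs-site.xml.
> HDFS_SITE_INDICATORS = {
>     'NAMENODE': ('dfs.namenode.keytab.file',
>                  ['dfs.namenode.kerberos.principal',
>                   'dfs.namenode.kerberos.https.principal']),
>     'DATANODE': ('dfs.datanode.keytab.file',
>                  ['dfs.datanode.kerberos.principal',
>                   'dfs.datanode.kerberos.https.principal']),
>     'SECONDARY_NAMENODE': ('dfs.namenode.secondary.keytab.file',
>                            ['dfs.namenode.secondary.kerberos.principal',
>                             'dfs.namenode.secondary.kerberos.https.principal']),
>     'HDFS_CLIENT': ('dfs.web.authentication.kerberos.keytab',
>                     ['dfs.web.authentication.kerberos.principal']),
> }
>
> def common_indicators_validate(config, hadoop_conf_dir):
>     # Shared Command JSON and core-site.xml checks.
>     if config['configurations']['cluster-env']['security_enabled'] != 'true':
>         return False
>     core_site = read_site_properties(os.path.join(hadoop_conf_dir, 'core-site.xml'))
>     if any(core_site.get(name) != value for name, value in CORE_SITE_EXPECTED.items()):
>         return False
>     return bool(core_site.get('hadoop.security.auth_to_local'))
>
> def component_security_state(component, config, hadoop_conf_dir):
>     if not common_indicators_validate(config, hadoop_conf_dir):
>         return 'UNSECURED'
>     # Component-specific keytab/principal indicators from hdfs-site.xml.
>     hdfs_site = read_site_properties(os.path.join(hadoop_conf_dir, 'hdfs-site.xml'))
>     keytab_prop, principal_props = HDFS_SITE_INDICATORS[component]
>     keytab = hdfs_site.get(keytab_prop, '')
>     principals = [hdfs_site.get(p, '') for p in principal_props]
>     if not (keytab and os.access(keytab, os.R_OK) and all(principals)):
>         return 'UNSECURED'
>     if all(kinit_succeeds(keytab, p) for p in principals):
>         return 'SECURED_KERBEROS'
>     return 'ERROR'
> {code}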
> h2. JOURNALNODE
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled']
> *** = "true"
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = "kerberos"
> *** required
> ** hadoop.security.authorization
> *** = "true"
> *** required
> ** hadoop.rpc.protection
> *** = "authentication"
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>   state = SECURED_KERBEROS
> else
>   state = UNSECURED
> {code}
> h2. ZKFC
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled']
> *** = "true"
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = "kerberos"
> *** required
> ** hadoop.security.authorization
> *** = "true"
> *** required
> ** hadoop.rpc.protection
> *** = "authentication"
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>   state = SECURED_KERBEROS
> else
>   state = UNSECURED
> {code}
> _*Note*_: Due to the _cost_ of calling {{kinit}}, results should be cached for a period of time before retrying. This may be an issue depending on the frequency of the heartbeat timeout.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)