Date: Tue, 23 Dec 2014 23:01:13 +0000 (UTC)
From: "Hudson (JIRA)"
To: dev@ambari.apache.org
Subject: [jira] [Commented] (AMBARI-8477) HDFS service components should indicate security state

[ https://issues.apache.org/jira/browse/AMBARI-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14257659#comment-14257659 ]

Hudson commented on AMBARI-8477:
--------------------------------

FAILURE: Integrated in Ambari-trunk-Commit-docker #591 (See [https://builds.apache.org/job/Ambari-trunk-Commit-docker/591/])
AMBARI-8477. HDFS service components should indicate security state. (robert levas via jaimin) (jaimin: http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=3f1d3dfac12e6d870c60d583af7ba1bdeb54a546)
* ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/zkfc_slave.py
* ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py
* ambari-common/src/main/python/resource_management/libraries/functions/security_commons.py
* ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/journalnode.py
* ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/status_params.py
* ambari-agent/src/test/python/resource_management/TestSecurityCommons.py
* ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/snamenode.py
* ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py
* ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py


> HDFS service components should indicate security state
> ------------------------------------------------------
>
>                 Key: AMBARI-8477
>                 URL: https://issues.apache.org/jira/browse/AMBARI-8477
>             Project: Ambari
>          Issue Type: Improvement
>          Components: ambari-server, stacks
>    Affects Versions: 2.0.0
>            Reporter: Robert Levas
>            Assignee: Robert Levas
>              Labels: agent, kerberos, lifecycle, security
>             Fix For: 2.0.0
>
>         Attachments: AMBARI-8477_01.patch, AMBARI-8477_01.patch, AMBARI-8477_01.patch, AMBARI-8477_02.patch, AMBARI-8477_03.patch, AMBARI-8477_04.patch, AMBARI-8477_04.patch, AMBARI-8477_05.patch
>
>
> The HDFS service components should indicate their security state when queried by the Ambari Agent via the STATUS_COMMAND. Each component should determine its state as follows:
> h2. NAMENODE
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled']
> *** = "true"
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = "kerberos"
> *** required
> ** hadoop.security.authorization
> *** = "true"
> *** required
> ** hadoop.rpc.protection
> *** = "authentication"
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.namenode.keytab.file
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.namenode.kerberos.principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>   if kinit(namenode principal) && kinit(https principal) succeeds
>     state = SECURED_KERBEROS
>   else
>     state = ERROR
> else
>   state = UNSECURED
> {code}
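> A minimal, hypothetical sketch of this check in Python (the language of the patched scripts). The property names come from the indicators above; the helper names, the temporary credential cache handling, and the exact {{kinit}} invocation are illustrative assumptions, not the committed Ambari API. The https principal check from the pseudocode is omitted for brevity.
> {code}
> # Hypothetical sketch only; helper names and kinit details are assumed.
> import os
> import subprocess
> import xml.etree.ElementTree as ET
>
> def read_site_properties(path):
>   """Parse a Hadoop *-site.xml file into a {name: value} dict."""
>   props = {}
>   for prop in ET.parse(path).getroot().findall('property'):
>     name = prop.findtext('name')
>     if name:
>       props[name] = prop.findtext('value') or ''
>   return props
>
> def kinit_succeeds(principal, keytab, cache='/tmp/ambari_status_cc'):
>   """kinit into a temporary cache, destroying the cache afterwards
>   (see the notes at the end of this issue)."""
>   try:
>     return subprocess.call(['kinit', '-kt', keytab, '-c', cache, principal]) == 0
>   finally:
>     if os.path.exists(cache):
>       os.remove(cache)
>
> def namenode_security_state(config, hadoop_conf_dir):
>   # Command JSON indicator
>   if config['configurations']['cluster-env']['security_enabled'] != 'true':
>     return 'UNSECURED'
>   core = read_site_properties(os.path.join(hadoop_conf_dir, 'core-site.xml'))
>   hdfs = read_site_properties(os.path.join(hadoop_conf_dir, 'hdfs-site.xml'))
>   keytab = hdfs.get('dfs.namenode.keytab.file', '')
>   # Real principals may contain _HOST, which must be replaced with the
>   # local hostname before calling kinit; elided here for brevity.
>   principal = hdfs.get('dfs.namenode.kerberos.principal', '')
>   valid = (core.get('hadoop.security.authentication') == 'kerberos'
>            and core.get('hadoop.security.authorization') == 'true'
>            and core.get('hadoop.rpc.protection') == 'authentication'
>            and bool(core.get('hadoop.security.auth_to_local'))
>            and keytab != '' and os.access(keytab, os.R_OK)
>            and principal != '')
>   if not valid:
>     return 'UNSECURED'
>   return 'SECURED_KERBEROS' if kinit_succeeds(principal, keytab) else 'ERROR'
> {code}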
> h2. DATANODE
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled']
> *** = "true"
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = "kerberos"
> *** required
> ** hadoop.security.authorization
> *** = "true"
> *** required
> ** hadoop.rpc.protection
> *** = "authentication"
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.datanode.keytab.file
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.datanode.kerberos.principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>   if kinit(datanode principal) && kinit(https principal) succeeds
>     state = SECURED_KERBEROS
>   else
>     state = ERROR
> else
>   state = UNSECURED
> {code}
> h2. SECONDARY_NAMENODE
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled']
> *** = "true"
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = "kerberos"
> *** required
> ** hadoop.security.authorization
> *** = "true"
> *** required
> ** hadoop.rpc.protection
> *** = "authentication"
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.secondary.namenode.keytab.file
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.secondary.namenode.kerberos.principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>   if kinit(secondary namenode principal) && kinit(https principal) succeeds
>     state = SECURED_KERBEROS
>   else
>     state = ERROR
> else
>   state = UNSECURED
> {code}
> h2. HDFS_CLIENT
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled']
> *** = "true"
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = "kerberos"
> *** required
> ** hadoop.security.authorization
> *** = "true"
> *** required
> ** hadoop.rpc.protection
> *** = "authentication"
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Env Params: hadoop-env
> ** hdfs_user_keytab
> *** not empty
> *** path exists and is readable
> *** required
> ** hdfs_user_principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>   if kinit(hdfs user principal) succeeds
>     state = SECURED_KERBEROS
>   else
>     state = ERROR
> else
>   state = UNSECURED
> {code}
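> The HDFS_CLIENT variant differs only in where the credentials come from: the hadoop-env params in the command JSON instead of hdfs-site.xml, and only the hdfs user principal is kinit'ed. A sketch reusing the helpers from the NAMENODE example above; the {{config\['configurations']\['hadoop-env']}} location is an assumption:
> {code}
> # Hypothetical sketch; reuses read_site_properties / kinit_succeeds
> # from the NAMENODE sketch above.
> import os
>
> def hdfs_client_security_state(config, hadoop_conf_dir):
>   if config['configurations']['cluster-env']['security_enabled'] != 'true':
>     return 'UNSECURED'
>   core = read_site_properties(os.path.join(hadoop_conf_dir, 'core-site.xml'))
>   env = config['configurations']['hadoop-env']  # assumed location of env params
>   keytab = env.get('hdfs_user_keytab', '')
>   principal = env.get('hdfs_user_principal', '')
>   valid = (core.get('hadoop.security.authentication') == 'kerberos'
>            and core.get('hadoop.security.authorization') == 'true'
>            and core.get('hadoop.rpc.protection') == 'authentication'
>            and bool(core.get('hadoop.security.auth_to_local'))
>            and keytab != '' and os.access(keytab, os.R_OK)
>            and principal != '')
>   if not valid:
>     return 'UNSECURED'
>   return 'SECURED_KERBEROS' if kinit_succeeds(principal, keytab) else 'ERROR'
> {code}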
> h2. JOURNALNODE
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled']
> *** = "true"
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = "kerberos"
> *** required
> ** hadoop.security.authorization
> *** = "true"
> *** required
> ** hadoop.rpc.protection
> *** = "authentication"
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> * Configuration File: params.hadoop_conf_dir + '/hdfs-site.xml'
> ** dfs.journalnode.keytab.file
> *** not empty
> *** path exists and is readable
> *** required
> ** dfs.journalnode.kerberos.principal
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>   state = SECURED_KERBEROS
> else
>   state = UNSECURED
> {code}
> h2. ZKFC
> h3. Indicators
> * Command JSON
> ** config\['configurations']\['cluster-env']\['security_enabled']
> *** = "true"
> * Configuration File: params.hadoop_conf_dir + '/core-site.xml'
> ** hadoop.security.authentication
> *** = "kerberos"
> *** required
> ** hadoop.security.authorization
> *** = "true"
> *** required
> ** hadoop.rpc.protection
> *** = "authentication"
> *** required
> ** hadoop.security.auth_to_local
> *** not empty
> *** required
> h3. Pseudocode
> {code}
> if indicators imply security is on and validate
>   state = SECURED_KERBEROS
> else
>   state = UNSECURED
> {code}
> _*Note*_: Due to the _cost_ of calling {{kinit}}, results should be cached for a period of time before retrying (a sketch of this follows below). This may be an issue depending on the frequency of the heartbeat timeout.
> _*Note*_: {{kinit}} calls should specify a _temporary_ cache file which should be destroyed after the command is executed - BUG-29477
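> A minimal sketch of the caching idea from the first note: wrap the expensive check and remember its outcome for a fixed window, so frequent STATUS_COMMAND heartbeats do not re-run {{kinit}} every time. The class name and the TTL value are illustrative, not part of the patch:
> {code}
> # Hypothetical sketch of time-based result caching for expensive checks.
> import time
>
> class CachedCheck(object):
>   def __init__(self, check_fn, ttl_seconds=300):
>     self.check_fn = check_fn  # e.g. lambda: kinit_succeeds(principal, keytab)
>     self.ttl = ttl_seconds    # how long a cached result stays valid
>     self.cached_at = None
>     self.result = None
>
>   def __call__(self):
>     # Re-run the underlying check only when the cached result has expired.
>     if self.cached_at is None or time.time() - self.cached_at > self.ttl:
>       self.result = self.check_fn()
>       self.cached_at = time.time()
>     return self.result
>
> # usage: namenode_kinit_ok = CachedCheck(lambda: kinit_succeeds(p, kt))
> {code}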