Date: Thu, 9 Jul 2015 14:27:06 +0000 (UTC)
From: "Vitaly Brodetskyi (JIRA)"
To: dev@ambari.apache.org
Subject: [jira] [Created] (AMBARI-12355) DATANODE START failed on secure cluster

Vitaly Brodetskyi created AMBARI-12355:
------------------------------------------

             Summary: DATANODE START failed on secure cluster
                 Key: AMBARI-12355
                 URL: https://issues.apache.org/jira/browse/AMBARI-12355
             Project: Ambari
          Issue Type: Bug
          Components: ambari-server
    Affects Versions: 2.1.0
            Reporter: Vitaly Brodetskyi
            Assignee: Vitaly Brodetskyi
            Priority: Blocker
             Fix For: 2.1.0

*STR*
# install ambari
# deploy cluster
# enable security
# stop all services
# start all services (a REST sketch of the stop/start steps is shown below)
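The "stop all services" / "start all services" steps are normally driven from the Ambari web UI, but they can also be issued against Ambari's REST API, which is convenient when retesting this scenario repeatedly. The sketch below is only an illustration, not the procedure used for this report: the server address, cluster name (c1), and credentials are placeholder assumptions.

{code}
# Minimal sketch: stop and then start all services through the Ambari REST API.
# AMBARI, CLUSTER and AUTH are placeholders for this illustration.
import requests

AMBARI = "http://AMBARI_HOST:8080/api/v1"   # hypothetical Ambari server
CLUSTER = "c1"                               # hypothetical cluster name
AUTH = ("admin", "admin")                    # adjust to real credentials
HEADERS = {"X-Requested-By": "ambari"}       # header required by the Ambari API

def set_all_services(state, context):
    """PUT the desired state for every service (INSTALLED = stopped, STARTED = running)."""
    body = {"RequestInfo": {"context": context},
            "Body": {"ServiceInfo": {"state": state}}}
    resp = requests.put("%s/clusters/%s/services" % (AMBARI, CLUSTER),
                        json=body, auth=AUTH, headers=HEADERS)
    resp.raise_for_status()
    # A 202 response carries a request resource that can be polled for completion.
    return resp.json() if resp.text else None

set_all_services("INSTALLED", "Stop All Services")   # step 4
# ...wait for the stop request to finish, then:
set_all_services("STARTED", "Start All Services")    # step 5
{code}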
*AR*
DATANODE START failed

stderr:
{code}
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 153, in <module>
    DataNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 216, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 47, in start
    datanode(action="start")
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py", line 58, in datanode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 266, in service
    environment=hadoop_env_exports
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 157, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 258, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode' returned 1. starting datanode, logging to /grid/0/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-172-31-38-11.out
{code}
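The Fail above only records that the wrapped command exited with status 1; the actual reason the DataNode refused to start is written to the .out file named in the message. As a debugging aid (not part of the original report), the sketch below re-runs the same command outside the agent and prints the tail of that log. The command, environment variable, and log path are copied from the output in this ticket; the .out file name is host-specific.

{code}
# Debugging sketch: re-run the command the agent executed and inspect the .out log.
import os
import subprocess

CMD = ("ambari-sudo.sh -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh "
       "--config /usr/hdp/current/hadoop-client/conf start datanode")
OUT_LOG = "/grid/0/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-172-31-38-11.out"

# The agent exports HADOOP_LIBEXEC_DIR for this Execute; merge it into the current env.
env = dict(os.environ, HADOOP_LIBEXEC_DIR="/usr/hdp/current/hadoop-client/libexec")

proc = subprocess.Popen(CMD, shell=True, env=env,
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output, _ = proc.communicate()
print("exit code: %d" % proc.returncode)
print(output)

# hadoop-daemon.sh redirects the daemon's startup output to the .out file,
# so the real failure reason usually shows up in its last lines.
with open(OUT_LOG) as f:
    print("".join(f.readlines()[-40:]))
{code}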
stdout:
{code}
2015-07-08 04:15:45,024 - Group['hadoop'] {'ignore_failures': False}
2015-07-08 04:15:45,026 - Group['users'] {'ignore_failures': False}
2015-07-08 04:15:45,027 - Group['knox'] {'ignore_failures': False}
2015-07-08 04:15:45,027 - Group['spark'] {'ignore_failures': False}
2015-07-08 04:15:45,028 - User['oozie'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['users']}
2015-07-08 04:15:45,029 - User['hive'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2015-07-08 04:15:45,031 - User['ambari-qa'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['users']}
2015-07-08 04:15:45,032 - User['flume'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2015-07-08 04:15:45,034 - User['hdfs'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2015-07-08 04:15:45,036 - User['knox'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2015-07-08 04:15:45,037 - User['storm'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2015-07-08 04:15:45,039 - User['spark'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2015-07-08 04:15:45,041 - User['mapred'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2015-07-08 04:15:45,043 - User['accumulo'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2015-07-08 04:15:45,046 - User['hbase'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2015-07-08 04:15:45,048 - User['tez'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['users']}
2015-07-08 04:15:45,049 - User['zookeeper'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2015-07-08 04:15:45,050 - User['falcon'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['users']}
2015-07-08 04:15:45,052 - User['sqoop'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2015-07-08 04:15:45,054 - User['yarn'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2015-07-08 04:15:45,056 - User['hcat'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2015-07-08 04:15:45,058 - User['ams'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2015-07-08 04:15:45,059 - User['atlas'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
2015-07-08 04:15:45,061 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-07-08 04:15:45,064 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2015-07-08 04:15:45,117 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2015-07-08 04:15:45,118 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2015-07-08 04:15:45,122 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-07-08 04:15:45,125 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2015-07-08 04:15:45,178 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2015-07-08 04:15:45,180 - Group['hdfs'] {'ignore_failures': False}
2015-07-08 04:15:45,181 - User['hdfs'] {'ignore_failures': False, 'groups': ['hadoop', 'hdfs']}
2015-07-08 04:15:45,183 - Directory['/etc/hadoop'] {'mode': 0755}
2015-07-08 04:15:45,210 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
2015-07-08 04:15:45,235 - Execute['('setenforce', '0')'] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2015-07-08 04:15:45,393 - Directory['/grid/0/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2015-07-08 04:15:45,397 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2015-07-08 04:15:45,397 - Changing owner for /var/run/hadoop from 2527 to root
2015-07-08 04:15:45,397 - Changing group for /var/run/hadoop from 550 to root
2015-07-08 04:15:45,398 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2015-07-08 04:15:45,406 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'root'}
2015-07-08 04:15:45,410 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'root'}
2015-07-08 04:15:45,411 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2015-07-08 04:15:45,427 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2015-07-08 04:15:45,428 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2015-07-08 04:15:45,430 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2015-07-08 04:15:45,438 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'group': 'hadoop'}
2015-07-08 04:15:45,440 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'mode': 0755}
2015-07-08 04:15:45,790 - Directory['/etc/security/limits.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
2015-07-08 04:15:45,801 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2015-07-08 04:15:45,802 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2015-07-08 04:15:45,824 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2015-07-08 04:15:45,824 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-07-08 04:15:45,840 - Writing File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] because contents don't match
2015-07-08 04:15:45,841 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2015-07-08 04:15:45,859 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2015-07-08 04:15:45,860 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-07-08 04:15:45,870 - Writing File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] because contents don't match
2015-07-08 04:15:45,871 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2015-07-08 04:15:45,872 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2015-07-08 04:15:45,889 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2015-07-08 04:15:45,889 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-07-08 04:15:45,899 - Writing File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] because contents don't match
2015-07-08 04:15:45,900 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2015-07-08 04:15:45,916 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2015-07-08 04:15:45,917 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-07-08 04:15:45,928 - Writing File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] because contents don't match
2015-07-08 04:15:45,929 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2015-07-08 04:15:45,948 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2015-07-08 04:15:45,949 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-07-08 04:15:46,041 - Writing File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] because contents don't match
2015-07-08 04:15:46,043 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2015-07-08 04:15:46,063 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2015-07-08 04:15:46,063 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2015-07-08 04:15:46,114 - Writing File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] because contents don't match
2015-07-08 04:15:46,118 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'root'}
2015-07-08 04:15:46,119 - Package['hadoop-lzo'] {}
2015-07-08 04:15:46,305 - Skipping installation of existing package hadoop-lzo
2015-07-08 04:15:46,305 - Package['lzo'] {}
2015-07-08 04:15:46,372 - Skipping installation of existing package lzo
2015-07-08 04:15:46,372 - Package['hadoop-lzo-native'] {}
2015-07-08 04:15:46,437 - Skipping installation of existing package hadoop-lzo-native
2015-07-08 04:15:46,437 - Package['hadooplzo_2_3_*'] {}
2015-07-08 04:15:46,502 - Skipping installation of existing package hadooplzo_2_3_*
2015-07-08 04:15:46,503 - Directory['/var/lib/hadoop-hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0751, 'recursive': True}
2015-07-08 04:15:46,552 - Host contains mounts: ['/', '/proc', '/sys', '/dev/pts', '/dev/shm', '/proc/sys/fs/binfmt_misc', '/grid/0', '/grid/1', '/grid/2', '/grid/3'].
2015-07-08 04:15:46,552 - Mount point for directory /grid/0/hadoop/hdfs/data is /grid/0
2015-07-08 04:15:46,556 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2015-07-08 04:15:46,558 - Changing owner for /var/run/hadoop from 0 to hdfs
2015-07-08 04:15:46,558 - Changing group for /var/run/hadoop from 0 to hadoop
2015-07-08 04:15:46,558 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
2015-07-08 04:15:46,559 - Directory['/grid/0/log/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
2015-07-08 04:15:46,560 - File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'] {'action': ['delete'], 'not_if': "ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1'"}
2015-07-08 04:15:46,705 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid']
2015-07-08 04:15:46,707 - Execute['ambari-sudo.sh -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': "ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1'"}
{code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)