ambari-dev mailing list archives

From "Andrew Onischuk" <aonis...@hortonworks.com>
Subject Re: Review Request 36352: DATANODE START failed on secure cluster
Date Fri, 10 Jul 2015 11:48:15 GMT

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36352/#review91286
-----------------------------------------------------------

Ship it!


Ship It!

- Andrew Onischuk


On July 10, 2015, 11:41 a.m., Vitalyi Brodetskyi wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/36352/
> -----------------------------------------------------------
> 
> (Updated July 10, 2015, 11:41 a.m.)
> 
> 
> Review request for Ambari and Andrew Onischuk.
> 
> 
> Bugs: AMBARI-12355
>     https://issues.apache.org/jira/browse/AMBARI-12355
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> *STR* (steps to reproduce)
> # install Ambari
> # deploy cluster
> # enable security
> # stop all services
> # start all services
> *AR* (actual result): DATANODE START failed; the traceback and command log follow, with a short sketch of the secure-mode rule involved after the log.
> {code}
> "stderr" :
> Traceback (most recent call last):
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 153, in <module>
>     DataNode().execute()
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 216, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 47, in start
>     datanode(action="start")
>   File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
>     return fn(*args, **kwargs)
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py", line 58, in datanode
>     create_log_dir=True
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 266, in service
>     environment=hadoop_env_exports
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 157, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 258, in action_run
>     tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
>     result = function(command, **kwargs)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
>     tries=tries, try_sleep=try_sleep)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
>     result = _call(command, **kwargs_copy)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
>     raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh  -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode' returned 1. starting datanode, logging to /grid/0/log/hadoop/hdfs/hadoop-hdfs-datanode-ip-172-31-38-11.out
>
> "stdout" :
> 2015-07-08 04:15:45,024 - Group['hadoop'] {'ignore_failures': False}
> 2015-07-08 04:15:45,026 - Group['users'] {'ignore_failures': False}
> 2015-07-08 04:15:45,027 - Group['knox'] {'ignore_failures': False}
> 2015-07-08 04:15:45,027 - Group['spark'] {'ignore_failures': False}
> 2015-07-08 04:15:45,028 - User['oozie'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['users']}
> 2015-07-08 04:15:45,029 - User['hive'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
> 2015-07-08 04:15:45,031 - User['ambari-qa'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['users']}
> 2015-07-08 04:15:45,032 - User['flume'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
> 2015-07-08 04:15:45,034 - User['hdfs'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
> 2015-07-08 04:15:45,036 - User['knox'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
> 2015-07-08 04:15:45,037 - User['storm'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
> 2015-07-08 04:15:45,039 - User['spark'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
> 2015-07-08 04:15:45,041 - User['mapred'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
> 2015-07-08 04:15:45,043 - User['accumulo'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
> 2015-07-08 04:15:45,046 - User['hbase'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
> 2015-07-08 04:15:45,048 - User['tez'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['users']}
> 2015-07-08 04:15:45,049 - User['zookeeper'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
> 2015-07-08 04:15:45,050 - User['falcon'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['users']}
> 2015-07-08 04:15:45,052 - User['sqoop'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
> 2015-07-08 04:15:45,054 - User['yarn'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
> 2015-07-08 04:15:45,056 - User['hcat'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
> 2015-07-08 04:15:45,058 - User['ams'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
> 2015-07-08 04:15:45,059 - User['atlas'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': ['hadoop']}
> 2015-07-08 04:15:45,061 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
> 2015-07-08 04:15:45,064 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
> 2015-07-08 04:15:45,117 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
> 2015-07-08 04:15:45,118 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
> 2015-07-08 04:15:45,122 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
> 2015-07-08 04:15:45,125 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
> 2015-07-08 04:15:45,178 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
> 2015-07-08 04:15:45,180 - Group['hdfs'] {'ignore_failures': False}
> 2015-07-08 04:15:45,181 - User['hdfs'] {'ignore_failures': False, 'groups': ['hadoop', 'hdfs']}
> 2015-07-08 04:15:45,183 - Directory['/etc/hadoop'] {'mode': 0755}
> 2015-07-08 04:15:45,210 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
> 2015-07-08 04:15:45,235 - Execute['('setenforce', '0')'] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
> 2015-07-08 04:15:45,393 - Directory['/grid/0/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
> 2015-07-08 04:15:45,397 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
> 2015-07-08 04:15:45,397 - Changing owner for /var/run/hadoop from 2527 to root
> 2015-07-08 04:15:45,397 - Changing group for /var/run/hadoop from 550 to root
> 2015-07-08 04:15:45,398 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
> 2015-07-08 04:15:45,406 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'root'}
> 2015-07-08 04:15:45,410 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'root'}
> 2015-07-08 04:15:45,411 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
> 2015-07-08 04:15:45,427 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
> 2015-07-08 04:15:45,428 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
> 2015-07-08 04:15:45,430 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
> 2015-07-08 04:15:45,438 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'group': 'hadoop'}
> 2015-07-08 04:15:45,440 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'mode': 0755}
> 2015-07-08 04:15:45,790 - Directory['/etc/security/limits.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
> 2015-07-08 04:15:45,801 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
> 2015-07-08 04:15:45,802 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
> 2015-07-08 04:15:45,824 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
> 2015-07-08 04:15:45,824 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
> 2015-07-08 04:15:45,840 - Writing File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] because contents don't match
> 2015-07-08 04:15:45,841 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
> 2015-07-08 04:15:45,859 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
> 2015-07-08 04:15:45,860 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
> 2015-07-08 04:15:45,870 - Writing File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] because contents don't match
> 2015-07-08 04:15:45,871 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
> 2015-07-08 04:15:45,872 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
> 2015-07-08 04:15:45,889 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
> 2015-07-08 04:15:45,889 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
> 2015-07-08 04:15:45,899 - Writing File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] because contents don't match
> 2015-07-08 04:15:45,900 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
> 2015-07-08 04:15:45,916 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
> 2015-07-08 04:15:45,917 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
> 2015-07-08 04:15:45,928 - Writing File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] because contents don't match
> 2015-07-08 04:15:45,929 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
> 2015-07-08 04:15:45,948 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
> 2015-07-08 04:15:45,949 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
> 2015-07-08 04:15:46,041 - Writing File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] because contents don't match
> 2015-07-08 04:15:46,043 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
> 2015-07-08 04:15:46,063 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
> 2015-07-08 04:15:46,063 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
> 2015-07-08 04:15:46,114 - Writing File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] because contents don't match
> 2015-07-08 04:15:46,118 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'root'}
> 2015-07-08 04:15:46,119 - Package['hadoop-lzo'] {}
> 2015-07-08 04:15:46,305 - Skipping installation of existing package hadoop-lzo
> 2015-07-08 04:15:46,305 - Package['lzo'] {}
> 2015-07-08 04:15:46,372 - Skipping installation of existing package lzo
> 2015-07-08 04:15:46,372 - Package['hadoop-lzo-native'] {}
> 2015-07-08 04:15:46,437 - Skipping installation of existing package hadoop-lzo-native
> 2015-07-08 04:15:46,437 - Package['hadooplzo_2_3_*'] {}
> 2015-07-08 04:15:46,502 - Skipping installation of existing package hadooplzo_2_3_*
> 2015-07-08 04:15:46,503 - Directory['/var/lib/hadoop-hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0751, 'recursive': True}
> 2015-07-08 04:15:46,552 - Host contains mounts: ['/', '/proc', '/sys', '/dev/pts', '/dev/shm', '/proc/sys/fs/binfmt_misc', '/grid/0', '/grid/1', '/grid/2', '/grid/3'].
> 2015-07-08 04:15:46,552 - Mount point for directory /grid/0/hadoop/hdfs/data is /grid/0
> 2015-07-08 04:15:46,556 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
> 2015-07-08 04:15:46,558 - Changing owner for /var/run/hadoop from 0 to hdfs
> 2015-07-08 04:15:46,558 - Changing group for /var/run/hadoop from 0 to hadoop
> 2015-07-08 04:15:46,558 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
> 2015-07-08 04:15:46,559 - Directory['/grid/0/log/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
> 2015-07-08 04:15:46,560 - File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'] {'action': ['delete'], 'not_if': "ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1'"}
> 2015-07-08 04:15:46,705 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid']
> 2015-07-08 04:15:46,707 - Execute['ambari-sudo.sh  -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': "ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1'"}
> {code}
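>
> For context, a minimal sketch of the secure-DataNode rule in play, assuming standard Hadoop behaviour; the helper below is hypothetical and only illustrates the kind of decision utils.py's service() makes, not the actual patch (see the diff for that):
>
> {code}
> def datanode_launch_plan(security_enabled, hdfs_user='hdfs'):
>     """Sketch: which user should invoke hadoop-daemon.sh for a DataNode.
>
>     Assumption (standard secure-Hadoop behaviour, not taken from this
>     patch): with Kerberos enabled the DataNode binds privileged ports,
>     so it must be started as root, which then drops to the unprivileged
>     user named in HADOOP_SECURE_DN_USER.
>     """
>     if security_enabled:
>         # root starts jsvc, which re-executes the DataNode as hdfs_user
>         return {'run_as': 'root',
>                 'env': {'HADOOP_SECURE_DN_USER': hdfs_user}}
>     # unsecured cluster: the hdfs user can start the daemon directly
>     return {'run_as': hdfs_user, 'env': {}}
> {code}
>
> Note also the 'not_if' guard visible in the log (pid file present and `ps -p` finds a live process): it is what lets Ambari skip the start when a DataNode is already running. Here the guard passed, the Execute ran, and hadoop-daemon.sh itself returned 1.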
> 
> 
> Diffs
> -----
> 
>   ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py
745a8d4 
>   ambari-server/src/test/python/stacks/2.0.6/HDFS/test_datanode.py b99f53a 
>   ambari-server/src/test/python/stacks/2.0.6/HDFS/test_journalnode.py ecf4b06 
>   ambari-server/src/test/python/stacks/2.0.6/HDFS/test_namenode.py abce658 
>   ambari-server/src/test/python/stacks/2.0.6/HDFS/test_nfsgateway.py a7e507e 
>   ambari-server/src/test/python/stacks/2.0.6/HDFS/test_snamenode.py b5dc82d 
>   ambari-server/src/test/python/stacks/2.0.6/HDFS/test_zkfc.py d3dcaf7 
> 
> Diff: https://reviews.apache.org/r/36352/diff/
> 
> 
> Testing
> -------
> 
> ----------------------------------------------------------------------
> Ran 261 tests in 7.778s
> 
> OK
> ----------------------------------------------------------------------
> Total run:801
> Total errors:0
> Total failures:0
> OK
> 
> 
> Thanks,
> 
> Vitalyi Brodetskyi
> 
>

