ambari-issues mailing list archives

From "Dmitry Lysnichenko (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (AMBARI-18800) Rolling Upgrade From HDP 2.5.x to 2.5.y Doesn't hdp-select ZKFC
Date Fri, 04 Nov 2016 15:21:58 GMT

     [ https://issues.apache.org/jira/browse/AMBARI-18800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dmitry Lysnichenko updated AMBARI-18800:
----------------------------------------
    Status: Patch Available  (was: Open)

> Rolling Upgrade From HDP 2.5.x to 2.5.y Doesn't hdp-select ZKFC
> ---------------------------------------------------------------
>
>                 Key: AMBARI-18800
>                 URL: https://issues.apache.org/jira/browse/AMBARI-18800
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>            Reporter: Dmitry Lysnichenko
>            Assignee: Dmitry Lysnichenko
>         Attachments: AMBARI-18800.patch
>
>
> - Install HDP 2.5.0.0 with HDFS in HA mode
> - Perform a rolling upgrade to 2.5.2.0
> At the end of the upgrade, the pre-finalize step fails on 2 hosts for ZKFC. Looking at the orchestration, ZKFC is restarted; however, {{hdp-select}} is never invoked:
> {noformat}
> 2016-11-02 18:38:15,577 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-hdfs-namenode'] {'timeout': 20}
> 2016-11-02 18:38:15,597 - call returned (0, 'hadoop-hdfs-namenode - 2.5.2.0-67')
> 2016-11-02 18:38:15,598 - hadoop-hdfs-namenode is currently at version 2.5.2.0-67
> 2016-11-02 18:38:15,600 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-hdfs-namenode'] {'timeout': 20}
> 2016-11-02 18:38:15,621 - call returned (0, 'hadoop-hdfs-namenode - 2.5.2.0-67')
> 2016-11-02 18:38:15,621 - hadoop-hdfs-namenode is currently at version 2.5.2.0-67
> 2016-11-02 18:38:15,623 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-hdfs-namenode'] {'timeout': 20}
> 2016-11-02 18:38:15,644 - call returned (0, 'hadoop-hdfs-namenode - 2.5.2.0-67')
> 2016-11-02 18:38:15,644 - hadoop-hdfs-namenode is currently at version 2.5.2.0-67
> 2016-11-02 18:38:15,646 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-hdfs-namenode'] {'timeout': 20}
> 2016-11-02 18:38:15,667 - call returned (0, 'hadoop-hdfs-namenode - 2.5.2.0-67')
> 2016-11-02 18:38:15,668 - hadoop-hdfs-namenode is currently at version 2.5.2.0-67
> 2016-11-02 18:38:15,669 - In the middle of a stack upgrade/downgrade for Stack HDP and destination version 2.5.2.0-67, determining which hadoop conf dir to use.
> 2016-11-02 18:38:15,669 - Hadoop conf dir: /usr/hdp/2.5.2.0-67/hadoop/conf
> 2016-11-02 18:38:15,669 - The hadoop conf dir /usr/hdp/2.5.2.0-67/hadoop/conf exists, will call conf-select on it for version 2.5.2.0-67
> 2016-11-02 18:38:15,669 - Checking if need to create versioned conf dir /etc/hadoop/2.5.2.0-67/0
> 2016-11-02 18:38:15,670 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.2.0-67', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
> 2016-11-02 18:38:15,690 - call returned (1, '/etc/hadoop/2.5.2.0-67/0 exist already', '')
> 2016-11-02 18:38:15,690 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.2.0-67', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
> 2016-11-02 18:38:15,716 - checked_call returned (0, '')
> 2016-11-02 18:38:15,716 - Ensuring that hadoop has the correct symlink structure
> 2016-11-02 18:38:15,716 - Using hadoop conf dir: /usr/hdp/2.5.2.0-67/hadoop/conf
> 2016-11-02 18:38:15,718 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-hdfs-namenode'] {'timeout': 20}
> 2016-11-02 18:38:15,738 - call returned (0, 'hadoop-hdfs-namenode - 2.5.2.0-67')
> 2016-11-02 18:38:15,739 - hadoop-hdfs-namenode is currently at version 2.5.2.0-67
> 2016-11-02 18:38:15,826 - In the middle of a stack upgrade/downgrade for Stack HDP and destination version 2.5.2.0-67, determining which hadoop conf dir to use.
> 2016-11-02 18:38:15,826 - Hadoop conf dir: /usr/hdp/2.5.2.0-67/hadoop/conf
> 2016-11-02 18:38:15,826 - The hadoop conf dir /usr/hdp/2.5.2.0-67/hadoop/conf exists, will call conf-select on it for version 2.5.2.0-67
> 2016-11-02 18:38:15,826 - Checking if need to create versioned conf dir /etc/hadoop/2.5.2.0-67/0
> 2016-11-02 18:38:15,827 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.2.0-67', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
> 2016-11-02 18:38:15,846 - call returned (1, '/etc/hadoop/2.5.2.0-67/0 exist already', '')
> 2016-11-02 18:38:15,846 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.2.0-67', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
> 2016-11-02 18:38:15,867 - checked_call returned (0, '')
> 2016-11-02 18:38:15,868 - Ensuring that hadoop has the correct symlink structure
> 2016-11-02 18:38:15,869 - Using hadoop conf dir: /usr/hdp/2.5.2.0-67/hadoop/conf
> 2016-11-02 18:38:15,870 - Group['hadoop'] {}
> 2016-11-02 18:38:15,871 - Group['users'] {}
> 2016-11-02 18:38:15,871 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2016-11-02 18:38:15,872 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2016-11-02 18:38:15,872 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
> 2016-11-02 18:38:15,873 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2016-11-02 18:38:15,873 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
> 2016-11-02 18:38:15,874 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2016-11-02 18:38:15,874 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2016-11-02 18:38:15,874 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2016-11-02 18:38:15,875 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
> 2016-11-02 18:38:15,877 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
> 2016-11-02 18:38:15,881 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
> 2016-11-02 18:38:15,882 - Group['hdfs'] {}
> 2016-11-02 18:38:15,882 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
> 2016-11-02 18:38:15,883 - FS Type:
> 2016-11-02 18:38:15,883 - Directory['/etc/hadoop'] {'mode': 0755}
> 2016-11-02 18:38:15,899 - File['/usr/hdp/2.5.2.0-67/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
> 2016-11-02 18:38:15,899 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
> 2016-11-02 18:38:15,916 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
> 2016-11-02 18:38:15,921 - Skipping Execute[('setenforce', '0')] due to not_if
> 2016-11-02 18:38:15,922 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
> 2016-11-02 18:38:15,925 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
> 2016-11-02 18:38:15,925 - Changing owner for /var/run/hadoop from 505 to root
> 2016-11-02 18:38:15,925 - Changing group for /var/run/hadoop from 503 to root
> 2016-11-02 18:38:15,925 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
> 2016-11-02 18:38:15,929 - File['/usr/hdp/2.5.2.0-67/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
> 2016-11-02 18:38:15,931 - File['/usr/hdp/2.5.2.0-67/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
> 2016-11-02 18:38:15,931 - File['/usr/hdp/2.5.2.0-67/hadoop/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
> 2016-11-02 18:38:15,942 - File['/usr/hdp/2.5.2.0-67/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
> 2016-11-02 18:38:15,942 - File['/usr/hdp/2.5.2.0-67/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
> 2016-11-02 18:38:15,943 - File['/usr/hdp/2.5.2.0-67/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
> 2016-11-02 18:38:15,947 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
> 2016-11-02 18:38:15,951 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
> 2016-11-02 18:38:16,154 - In the middle of a stack upgrade/downgrade for Stack HDP and destination version 2.5.2.0-67, determining which hadoop conf dir to use.
> 2016-11-02 18:38:16,155 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-hdfs-namenode'] {'timeout': 20}
> 2016-11-02 18:38:16,182 - call returned (0, 'hadoop-hdfs-namenode - 2.5.2.0-67')
> 2016-11-02 18:38:16,184 - hadoop-hdfs-namenode is currently at version 2.5.2.0-67
> 2016-11-02 18:38:16,184 - Hadoop conf dir: /usr/hdp/2.5.2.0-67/hadoop/conf
> 2016-11-02 18:38:16,184 - The hadoop conf dir /usr/hdp/2.5.2.0-67/hadoop/conf exists, will call conf-select on it for version 2.5.2.0-67
> 2016-11-02 18:38:16,185 - Checking if need to create versioned conf dir /etc/hadoop/2.5.2.0-67/0
> 2016-11-02 18:38:16,185 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.2.0-67', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
> 2016-11-02 18:38:16,207 - call returned (1, '/etc/hadoop/2.5.2.0-67/0 exist already', '')
> 2016-11-02 18:38:16,208 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.2.0-67', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
> 2016-11-02 18:38:16,230 - checked_call returned (0, '')
> 2016-11-02 18:38:16,231 - Ensuring that hadoop has the correct symlink structure
> 2016-11-02 18:38:16,231 - Using hadoop conf dir: /usr/hdp/2.5.2.0-67/hadoop/conf
> 2016-11-02 18:38:16,235 - Stack Feature Version Info: stack_version=2.5, version=2.5.2.0-67, current_cluster_version=2.5.0.0-1245, upgrade_direction=upgrade -> 2.5.2.0-67
> 2016-11-02 18:38:16,237 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-hdfs-namenode'] {'timeout': 20}
> 2016-11-02 18:38:16,259 - call returned (0, 'hadoop-hdfs-namenode - 2.5.2.0-67')
> 2016-11-02 18:38:16,259 - hadoop-hdfs-namenode is currently at version 2.5.2.0-67
> 2016-11-02 18:38:16,261 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-hdfs-namenode'] {'timeout': 20}
> 2016-11-02 18:38:16,282 - call returned (0, 'hadoop-hdfs-namenode - 2.5.2.0-67')
> 2016-11-02 18:38:16,282 - hadoop-hdfs-namenode is currently at version 2.5.2.0-67
> 2016-11-02 18:38:16,284 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-hdfs-namenode'] {'timeout': 20}
> 2016-11-02 18:38:16,319 - call returned (0, 'hadoop-hdfs-namenode - 2.5.2.0-67')
> 2016-11-02 18:38:16,319 - hadoop-hdfs-namenode is currently at version 2.5.2.0-67
> 2016-11-02 18:38:16,321 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-hdfs-namenode'] {'timeout': 20}
> 2016-11-02 18:38:16,343 - call returned (0, 'hadoop-hdfs-namenode - 2.5.2.0-67')
> 2016-11-02 18:38:16,343 - hadoop-hdfs-namenode is currently at version 2.5.2.0-67
> 2016-11-02 18:38:16,344 - In the middle of a stack upgrade/downgrade for Stack HDP and destination version 2.5.2.0-67, determining which hadoop conf dir to use.
> 2016-11-02 18:38:16,344 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-hdfs-namenode'] {'timeout': 20}
> 2016-11-02 18:38:16,365 - call returned (0, 'hadoop-hdfs-namenode - 2.5.2.0-67')
> 2016-11-02 18:38:16,365 - hadoop-hdfs-namenode is currently at version 2.5.2.0-67
> 2016-11-02 18:38:16,365 - Hadoop conf dir: /usr/hdp/2.5.2.0-67/hadoop/conf
> 2016-11-02 18:38:16,365 - The hadoop conf dir /usr/hdp/2.5.2.0-67/hadoop/conf exists, will call conf-select on it for version 2.5.2.0-67
> 2016-11-02 18:38:16,365 - Checking if need to create versioned conf dir /etc/hadoop/2.5.2.0-67/0
> 2016-11-02 18:38:16,366 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.2.0-67', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
> 2016-11-02 18:38:16,385 - call returned (1, '/etc/hadoop/2.5.2.0-67/0 exist already', '')
> 2016-11-02 18:38:16,386 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.2.0-67', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
> 2016-11-02 18:38:16,405 - checked_call returned (0, '')
> 2016-11-02 18:38:16,406 - Ensuring that hadoop has the correct symlink structure
> 2016-11-02 18:38:16,406 - Using hadoop conf dir: /usr/hdp/2.5.2.0-67/hadoop/conf
> 2016-11-02 18:38:16,407 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-hdfs-namenode'] {'timeout': 20}
> 2016-11-02 18:38:16,426 - call returned (0, 'hadoop-hdfs-namenode - 2.5.2.0-67')
> 2016-11-02 18:38:16,426 - hadoop-hdfs-namenode is currently at version 2.5.2.0-67
> 2016-11-02 18:38:16,432 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1}
> 2016-11-02 18:38:16,444 - checked_call returned (0, '2.5.2.0-67', '')
> 2016-11-02 18:38:16,448 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/2.5.2.0-67/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.5.2.0-67/hadoop/conf stop zkfc''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/2.5.2.0-67/hadoop/libexec'}, 'only_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-zkfc.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-zkfc.pid'}
> 2016-11-02 18:38:21,504 - File['/var/run/hadoop/hdfs/hadoop-hdfs-zkfc.pid'] {'action': ['delete']}
> 2016-11-02 18:38:21,505 - Skipping status check for HDFS service during upgrade
> 2016-11-02 18:38:21,509 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
> 2016-11-02 18:38:21,514 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
> 2016-11-02 18:38:21,515 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.5.2.0-67/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
> 2016-11-02 18:38:21,524 - Generating config: /usr/hdp/2.5.2.0-67/hadoop/conf/hadoop-policy.xml
> 2016-11-02 18:38:21,524 - File['/usr/hdp/2.5.2.0-67/hadoop/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
> 2016-11-02 18:38:21,532 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.5.2.0-67/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
> 2016-11-02 18:38:21,541 - Generating config: /usr/hdp/2.5.2.0-67/hadoop/conf/ssl-client.xml
> 2016-11-02 18:38:21,541 - File['/usr/hdp/2.5.2.0-67/hadoop/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
> 2016-11-02 18:38:21,546 - Directory['/usr/hdp/2.5.2.0-67/hadoop/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
> 2016-11-02 18:38:21,547 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.5.2.0-67/hadoop/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
> 2016-11-02 18:38:21,555 - Generating config: /usr/hdp/2.5.2.0-67/hadoop/conf/secure/ssl-client.xml
> 2016-11-02 18:38:21,555 - File['/usr/hdp/2.5.2.0-67/hadoop/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
> 2016-11-02 18:38:21,560 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.5.2.0-67/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
> 2016-11-02 18:38:21,565 - Generating config: /usr/hdp/2.5.2.0-67/hadoop/conf/ssl-server.xml
> 2016-11-02 18:38:21,565 - File['/usr/hdp/2.5.2.0-67/hadoop/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
> 2016-11-02 18:38:21,574 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.5.2.0-67/hadoop/conf', 'configuration_attributes': {'final': {'dfs.datanode.failed.volumes.tolerated': 'true', 'dfs.datanode.data.dir': 'true', 'dfs.namenode.name.dir': 'true', 'dfs.support.append': 'true', 'dfs.webhdfs.enabled': 'true'}}, 'configurations': ...}
> 2016-11-02 18:38:21,583 - Generating config: /usr/hdp/2.5.2.0-67/hadoop/conf/hdfs-site.xml
> 2016-11-02 18:38:21,583 - File['/usr/hdp/2.5.2.0-67/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
> 2016-11-02 18:38:21,627 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/2.5.2.0-67/hadoop/conf', 'mode': 0644, 'configuration_attributes': {'final': {'fs.defaultFS': 'true'}}, 'owner': 'hdfs', 'configurations': ...}
> 2016-11-02 18:38:21,633 - Generating config: /usr/hdp/2.5.2.0-67/hadoop/conf/core-site.xml
> 2016-11-02 18:38:21,633 - File['/usr/hdp/2.5.2.0-67/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
> 2016-11-02 18:38:21,651 - File['/usr/hdp/2.5.2.0-67/hadoop/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
> 2016-11-02 18:38:21,652 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
> 2016-11-02 18:38:21,652 - Changing owner for /var/run/hadoop from 0 to hdfs
> 2016-11-02 18:38:21,652 - Changing group for /var/run/hadoop from 0 to hadoop
> 2016-11-02 18:38:21,653 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
> 2016-11-02 18:38:21,653 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
> 2016-11-02 18:38:21,654 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
> 2016-11-02 18:38:21,654 - File['/var/run/hadoop/hdfs/hadoop-hdfs-zkfc.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-zkfc.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-zkfc.pid'}
> 2016-11-02 18:38:21,661 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/2.5.2.0-67/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.5.2.0-67/hadoop/conf start zkfc''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/2.5.2.0-67/hadoop/libexec'}, 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-zkfc.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-zkfc.pid'}
> 2016-11-02 18:38:25,735 - Component has started with pid(s): 17307
> {noformat}
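For context, the missing step can be sketched as a small Python fragment. This is a hypothetical illustration only, not Ambari's actual orchestration code: the function name {{commands_for_restart}} and its shape are invented for this sketch. It shows the ordering a version-aware restart should enforce during a stack upgrade: issue {{hdp-select set}} for the component's package before starting the daemon, which is exactly the call absent from the log above (the ZKFC stop/start runs, but no {{hdp-select}} invocation appears between them).

```python
# Hypothetical sketch (not Ambari's real code): during a rolling upgrade,
# restarting a component should first point its binaries at the target
# version via hdp-select, then start the daemon. In the log above, the
# ZKFC restart performs stop/start without ever issuing the set command.

def commands_for_restart(package, current_version, target_version):
    """Return the shell commands a version-aware restart should issue."""
    cmds = []
    if current_version != target_version:
        # The step missing in AMBARI-18800: switch the symlinks first.
        cmds.append(["hdp-select", "set", package, target_version])
    # ZKFC binaries ship with the namenode package, hence the shared name.
    cmds.append(["hadoop-daemon.sh", "--config",
                 "/usr/hdp/%s/hadoop/conf" % target_version, "start", "zkfc"])
    return cmds

# Example: upgrading from 2.5.0.0-1245 to 2.5.2.0-67
for cmd in commands_for_restart("hadoop-hdfs-namenode",
                                "2.5.0.0-1245", "2.5.2.0-67"):
    print(" ".join(cmd))
```

If the versions already match, the sketch emits only the daemon start, which matches the behavior a restart outside an upgrade should have.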



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
