ambari-issues mailing list archives

From Balázs Bence Sári (JIRA) <j...@apache.org>
Subject [jira] [Updated] (AMBARI-16360) RM fails to start after adding services in Kerb'd cluster
Date Tue, 10 May 2016 09:07:12 GMT

     [ https://issues.apache.org/jira/browse/AMBARI-16360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Balázs Bence Sári updated AMBARI-16360:
---------------------------------------
    Attachment: patch.1

Patch containing the fix.

> RM fails to start after adding services in Kerb'd cluster
> ---------------------------------------------------------
>
>                 Key: AMBARI-16360
>                 URL: https://issues.apache.org/jira/browse/AMBARI-16360
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-agent
>    Affects Versions: 2.4.0
>            Reporter: Balázs Bence Sári
>            Assignee: Balázs Bence Sári
>            Priority: Critical
>             Fix For: 2.4.0
>
>         Attachments: patch.1
>
>
> Build #220
> 1) Single-node cluster, HDP 2.4
> 2) Install HDFS, ZooKeeper, AMS
> 3) Enable Kerberos with an MIT KDC
> 4) Use the Add Service wizard to add Hive, YARN, Tez, Storm
> 5) The Add Service wizard fails: ResourceManager fails to start.
> 6) After the wizard exits, ResourceManager cannot be started at all; every attempt fails with the same error (the username mismatch behind it is sketched after the log below).
> {code}
> stderr:   /var/lib/ambari-agent/data/errors-121.txt
> Traceback (most recent call last):
>   File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/resourcemanager.py", line 280, in <module>
>     Resourcemanager().execute()
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 241, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/resourcemanager.py", line 125, in start
>     self.wait_for_dfs_directories_created(params.entity_groupfs_store_dir, params.entity_groupfs_active_dir)
>   File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/resourcemanager.py", line 245, in wait_for_dfs_directories_created
>     self.wait_for_dfs_directory_created(dir_path, ignored_dfs_dirs)
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/decorator.py", line 55, in wrapper
>     return function(*args, **kwargs)
>   File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/resourcemanager.py", line 267, in wait_for_dfs_directory_created
>     list_status = util.run_command(dir_path, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False)
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 195, in run_command
>     raise Fail(err_msg)
> resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X GET --negotiate -u : 'http://c6401.ambari.apache.org:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=yarn'' returned status_code=403.
> {
>   "RemoteException": {
>     "exception": "SecurityException",
>     "javaClassName": "java.lang.SecurityException",
>     "message": "Failed to obtain user group information: java.io.IOException: Usernames not matched: name=yarn != expected=rm"
>   }
> }
> stdout:   /var/lib/ambari-agent/data/output-121.txt
> 2016-04-06 17:52:33,805 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists,
will call conf-select on it for version 2.4.0.0-169
> 2016-04-06 17:52:33,805 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
> 2016-04-06 17:52:33,805 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir',
'--package', 'hadoop', '--stack-version', '2.4.0.0-169', '--conf-version', '0')] {'logoutput':
False, 'sudo': True, 'quiet': False, 'stderr': -1}
> 2016-04-06 17:52:33,915 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already',
'')
> 2016-04-06 17:52:33,915 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select',
'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.0.0-169', '--conf-version',
'0')] {'logoutput': False, 'sudo': True, 'quiet': False}
> 2016-04-06 17:52:33,987 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf
-> /etc/hadoop/2.4.0.0-169/0')
> 2016-04-06 17:52:33,988 - Ensuring that hadoop has the correct symlink structure
> 2016-04-06 17:52:33,988 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
> 2016-04-06 17:52:34,551 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists,
will call conf-select on it for version 2.4.0.0-169
> 2016-04-06 17:52:34,552 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
> 2016-04-06 17:52:34,552 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir',
'--package', 'hadoop', '--stack-version', '2.4.0.0-169', '--conf-version', '0')] {'logoutput':
False, 'sudo': True, 'quiet': False, 'stderr': -1}
> 2016-04-06 17:52:34,657 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already',
'')
> 2016-04-06 17:52:34,658 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select',
'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.0.0-169', '--conf-version',
'0')] {'logoutput': False, 'sudo': True, 'quiet': False}
> 2016-04-06 17:52:34,740 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf
-> /etc/hadoop/2.4.0.0-169/0')
> 2016-04-06 17:52:34,741 - Ensuring that hadoop has the correct symlink structure
> 2016-04-06 17:52:34,742 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
> 2016-04-06 17:52:34,742 - Group['hadoop'] {}
> 2016-04-06 17:52:34,748 - Group['users'] {}
> 2016-04-06 17:52:34,748 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True,
'groups': ['hadoop']}
> 2016-04-06 17:52:34,750 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups':
True, 'groups': ['hadoop']}
> 2016-04-06 17:52:34,751 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True,
'groups': ['hadoop']}
> 2016-04-06 17:52:34,751 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups':
True, 'groups': ['users']}
> 2016-04-06 17:52:34,752 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True,
'groups': ['users']}
> 2016-04-06 17:52:34,752 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True,
'groups': ['hadoop']}
> 2016-04-06 17:52:34,753 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True,
'groups': ['hadoop']}
> 2016-04-06 17:52:34,753 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True,
'groups': ['hadoop']}
> 2016-04-06 17:52:34,756 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content':
StaticFile('changeToSecureUid.sh'), 'mode': 0555}
> 2016-04-06 17:52:34,767 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']
{'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
> 2016-04-06 17:52:34,816 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa
/tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']
due to not_if
> 2016-04-06 17:52:34,816 - Group['hdfs'] {}
> 2016-04-06 17:52:34,817 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop',
'hdfs']}
> 2016-04-06 17:52:34,818 - FS Type: 
> 2016-04-06 17:52:34,818 - Directory['/etc/hadoop'] {'mode': 0755}
> 2016-04-06 17:52:34,860 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content':
InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
> 2016-04-06 17:52:34,864 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir']
{'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
> 2016-04-06 17:52:34,905 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce
) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if':
'test -f /selinux/enforce'}
> 2016-04-06 17:52:34,975 - Skipping Execute[('setenforce', '0')] due to not_if
> 2016-04-06 17:52:34,975 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents':
True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
> 2016-04-06 17:52:34,986 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents':
True, 'group': 'root', 'cd_access': 'a'}
> 2016-04-06 17:52:34,987 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents':
True, 'cd_access': 'a'}
> 2016-04-06 17:52:34,997 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties']
{'content': Template('commons-logging.properties.j2'), 'owner': 'root'}
> 2016-04-06 17:52:34,999 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content':
Template('health_check.j2'), 'owner': 'root'}
> 2016-04-06 17:52:35,000 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties']
{'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
> 2016-04-06 17:52:35,041 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties']
{'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
> 2016-04-06 17:52:35,045 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties']
{'content': StaticFile('task-log4j.properties'), 'mode': 0755}
> 2016-04-06 17:52:35,048 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl']
{'owner': 'hdfs', 'group': 'hadoop'}
> 2016-04-06 17:52:35,053 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs',
'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group':
'hadoop'}
> 2016-04-06 17:52:35,062 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'),
'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
> 2016-04-06 17:52:35,487 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists,
will call conf-select on it for version 2.4.0.0-169
> 2016-04-06 17:52:35,488 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
> 2016-04-06 17:52:35,489 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir',
'--package', 'hadoop', '--stack-version', '2.4.0.0-169', '--conf-version', '0')] {'logoutput':
False, 'sudo': True, 'quiet': False, 'stderr': -1}
> 2016-04-06 17:52:35,568 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already',
'')
> 2016-04-06 17:52:35,569 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select',
'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.0.0-169', '--conf-version',
'0')] {'logoutput': False, 'sudo': True, 'quiet': False}
> 2016-04-06 17:52:35,635 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf
-> /etc/hadoop/2.4.0.0-169/0')
> 2016-04-06 17:52:35,636 - Ensuring that hadoop has the correct symlink structure
> 2016-04-06 17:52:35,636 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
> 2016-04-06 17:52:35,637 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-yarn-resourcemanager']
{'timeout': 20}
> 2016-04-06 17:52:35,699 - call returned (0, 'hadoop-yarn-resourcemanager - 2.4.0.0-169')
> 2016-04-06 17:52:35,701 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists,
will call conf-select on it for version 2.4.0.0-169
> 2016-04-06 17:52:35,702 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
> 2016-04-06 17:52:35,702 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir',
'--package', 'hadoop', '--stack-version', '2.4.0.0-169', '--conf-version', '0')] {'logoutput':
False, 'sudo': True, 'quiet': False, 'stderr': -1}
> 2016-04-06 17:52:35,768 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already',
'')
> 2016-04-06 17:52:35,769 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select',
'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.4.0.0-169', '--conf-version',
'0')] {'logoutput': False, 'sudo': True, 'quiet': False}
> 2016-04-06 17:52:35,860 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf
-> /etc/hadoop/2.4.0.0-169/0')
> 2016-04-06 17:52:35,861 - Ensuring that hadoop has the correct symlink structure
> 2016-04-06 17:52:35,861 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
> 2016-04-06 17:52:35,883 - Directory['/var/log/hadoop-yarn/nodemanager/recovery-state']
{'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
> 2016-04-06 17:52:35,896 - Directory['/var/run/hadoop-yarn'] {'owner': 'yarn', 'create_parents':
True, 'group': 'hadoop', 'cd_access': 'a'}
> 2016-04-06 17:52:35,899 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 'yarn', 'create_parents':
True, 'group': 'hadoop', 'cd_access': 'a'}
> 2016-04-06 17:52:35,900 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 'yarn', 'group':
'hadoop', 'create_parents': True, 'cd_access': 'a'}
> 2016-04-06 17:52:35,905 - Directory['/var/run/hadoop-mapreduce'] {'owner': 'mapred',
'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
> 2016-04-06 17:52:35,905 - Directory['/var/run/hadoop-mapreduce/mapred'] {'owner': 'mapred',
'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
> 2016-04-06 17:52:35,913 - Directory['/var/log/hadoop-mapreduce'] {'owner': 'mapred',
'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
> 2016-04-06 17:52:35,914 - Directory['/var/log/hadoop-mapreduce/mapred'] {'owner': 'mapred',
'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
> 2016-04-06 17:52:35,914 - Directory['/var/log/hadoop-yarn'] {'owner': 'yarn', 'ignore_failures':
True, 'create_parents': True, 'cd_access': 'a'}
> 2016-04-06 17:52:35,915 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir':
'/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner':
'hdfs', 'configurations': ...}
> 2016-04-06 17:52:35,947 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
> 2016-04-06 17:52:35,947 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner':
'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
> 2016-04-06 17:52:36,100 - XmlConfig['hdfs-site.xml'] {'group': 'hadoop', 'conf_dir':
'/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner':
'hdfs', 'configurations': ...}
> 2016-04-06 17:52:36,127 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
> 2016-04-06 17:52:36,127 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner':
'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
> 2016-04-06 17:52:36,336 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir':
'/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner':
'yarn', 'configurations': ...}
> 2016-04-06 17:52:36,374 - Generating config: /usr/hdp/current/hadoop-client/conf/mapred-site.xml
> 2016-04-06 17:52:36,374 - File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml']
{'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding':
'UTF-8'}
> 2016-04-06 17:52:36,561 - Changing owner for /usr/hdp/current/hadoop-client/conf/mapred-site.xml
from 1005 to yarn
> 2016-04-06 17:52:36,561 - XmlConfig['yarn-site.xml'] {'group': 'hadoop', 'conf_dir':
'/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner':
'yarn', 'configurations': ...}
> 2016-04-06 17:52:36,590 - Generating config: /usr/hdp/current/hadoop-client/conf/yarn-site.xml
> 2016-04-06 17:52:36,590 - File['/usr/hdp/current/hadoop-client/conf/yarn-site.xml'] {'owner':
'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
> 2016-04-06 17:52:36,998 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir':
'/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner':
'yarn', 'configurations': ...}
> 2016-04-06 17:52:37,026 - Generating config: /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml
> 2016-04-06 17:52:37,030 - File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml']
{'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding':
'UTF-8'}
> 2016-04-06 17:52:37,088 - Changing owner for /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml
from 505 to yarn
> 2016-04-06 17:52:37,088 - File['/var/log/hadoop-yarn/yarn/hadoop-mapreduce.jobsummary.log']
{'owner': 'yarn', 'group': 'hadoop'}
> 2016-04-06 17:52:37,090 - Writing File['/var/log/hadoop-yarn/yarn/hadoop-mapreduce.jobsummary.log']
because it doesn't exist
> 2016-04-06 17:52:37,093 - Changing owner for /var/log/hadoop-yarn/yarn/hadoop-mapreduce.jobsummary.log
from 0 to yarn
> 2016-04-06 17:52:37,094 - Changing group for /var/log/hadoop-yarn/yarn/hadoop-mapreduce.jobsummary.log
from 0 to hadoop
> 2016-04-06 17:52:37,094 - File['/etc/hadoop/conf/yarn.exclude'] {'owner': 'yarn', 'group':
'hadoop'}
> 2016-04-06 17:52:37,106 - File['/etc/security/limits.d/yarn.conf'] {'content': Template('yarn.conf.j2'),
'mode': 0644}
> 2016-04-06 17:52:37,111 - File['/etc/security/limits.d/mapreduce.conf'] {'content': Template('mapreduce.conf.j2'),
'mode': 0644}
> 2016-04-06 17:52:37,152 - File['/usr/hdp/current/hadoop-client/conf/yarn-env.sh'] {'content':
InlineTemplate(...), 'owner': 'yarn', 'group': 'hadoop', 'mode': 0755}
> 2016-04-06 17:52:37,153 - Writing File['/usr/hdp/current/hadoop-client/conf/yarn-env.sh']
because contents don't match
> 2016-04-06 17:52:37,174 - File['/usr/hdp/current/hadoop-yarn-resourcemanager/bin/container-executor']
{'group': 'hadoop', 'mode': 06050}
> 2016-04-06 17:52:37,184 - File['/usr/hdp/current/hadoop-client/conf/container-executor.cfg']
{'content': Template('container-executor.cfg.j2'), 'group': 'hadoop', 'mode': 0644}
> 2016-04-06 17:52:37,193 - Directory['/cgroups_test/cpu'] {'group': 'hadoop', 'create_parents':
True, 'mode': 0755, 'cd_access': 'a'}
> 2016-04-06 17:52:37,200 - File['/usr/hdp/current/hadoop-client/conf/mapred-env.sh'] {'content':
InlineTemplate(...), 'owner': 'root', 'mode': 0755}
> 2016-04-06 17:52:37,200 - File['/usr/hdp/current/hadoop-client/sbin/task-controller']
{'owner': 'root', 'group': 'hadoop', 'mode': 06050}
> 2016-04-06 17:52:37,207 - File['/usr/hdp/current/hadoop-client/conf/taskcontroller.cfg']
{'content': Template('taskcontroller.cfg.j2'), 'owner': 'root', 'group': 'hadoop', 'mode':
0644}
> 2016-04-06 17:52:37,207 - XmlConfig['mapred-site.xml'] {'owner': 'mapred', 'group': 'hadoop',
'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations':
...}
> 2016-04-06 17:52:37,264 - Generating config: /usr/hdp/current/hadoop-client/conf/mapred-site.xml
> 2016-04-06 17:52:37,264 - File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml']
{'owner': 'mapred', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding':
'UTF-8'}
> 2016-04-06 17:52:37,547 - Changing owner for /usr/hdp/current/hadoop-client/conf/mapred-site.xml
from 1004 to mapred
> 2016-04-06 17:52:37,547 - XmlConfig['capacity-scheduler.xml'] {'owner': 'hdfs', 'group':
'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {},
'configurations': ...}
> 2016-04-06 17:52:37,653 - Generating config: /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml
> 2016-04-06 17:52:37,653 - File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml']
{'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding':
'UTF-8'}
> 2016-04-06 17:52:37,771 - Changing owner for /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml
from 1004 to hdfs
> 2016-04-06 17:52:37,772 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop',
'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations':
...}
> 2016-04-06 17:52:37,838 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
> 2016-04-06 17:52:37,838 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml']
{'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding':
'UTF-8'}
> 2016-04-06 17:52:37,894 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner':
'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
> 2016-04-06 17:52:37,910 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop',
'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {},
'configurations': ...}
> 2016-04-06 17:52:37,958 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
> 2016-04-06 17:52:37,958 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml']
{'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding':
'UTF-8'}
> 2016-04-06 17:52:37,995 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop',
'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations':
...}
> 2016-04-06 17:52:38,071 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
> 2016-04-06 17:52:38,072 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml']
{'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding':
'UTF-8'}
> 2016-04-06 17:52:38,098 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml.example']
{'owner': 'mapred', 'group': 'hadoop'}
> 2016-04-06 17:52:38,099 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml.example']
{'owner': 'mapred', 'group': 'hadoop'}
> 2016-04-06 17:52:38,099 - Verifying DFS directories where ATS stores time line data for active and completed applications.
> 2016-04-06 17:52:38,104 - Execute['/usr/bin/kinit -kt /etc/security/keytabs/rm.service.keytab rm/c6401.ambari.apache.org@EXAMPLE.COM;'] {'user': 'yarn'}
> 2016-04-06 17:52:38,404 - Verifying if DFS directory '/ats/done/' exists.
> 2016-04-06 17:52:38,406 - call['ambari-sudo.sh su yarn -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : '"'"'http://c6401.ambari.apache.org:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=yarn'"'"' 1>/tmp/tmpTOXEBI 2>/tmp/tmpwHIFI_''] {'logoutput': None, 'quiet': False}
> 2016-04-06 17:52:38,572 - call returned (0, '')
> 2016-04-06 17:52:38,573 - Will retry 7 time(s), caught exception: Execution of 'curl -sS -L -w '%{http_code}' -X GET --negotiate -u : 'http://c6401.ambari.apache.org:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=yarn'' returned status_code=403.
> {
>   "RemoteException": {
>     "exception": "SecurityException",
>     "javaClassName": "java.lang.SecurityException",
>     "message": "Failed to obtain user group information: java.io.IOException: Usernames not matched: name=yarn != expected=rm"
>   }
> }. Sleeping for 20 sec(s)
> 2016-04-06 17:52:58,592 - Verifying if DFS directory '/ats/done/' exists.
> 2016-04-06 17:52:58,594 - call['ambari-sudo.sh su yarn -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : '"'"'http://c6401.ambari.apache.org:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=yarn'"'"' 1>/tmp/tmpenjO1U 2>/tmp/tmp7i9TPL''] {'logoutput': None, 'quiet': False}
> 2016-04-06 17:52:58,696 - call returned (0, '')
> 2016-04-06 17:52:58,696 - Will retry 6 time(s), caught exception: Execution of 'curl -sS -L -w '%{http_code}' -X GET --negotiate -u : 'http://c6401.ambari.apache.org:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=yarn'' returned status_code=403.
> {
>   "RemoteException": {
>     "exception": "SecurityException",
>     "javaClassName": "java.lang.SecurityException",
>     "message": "Failed to obtain user group information: java.io.IOException: Usernames not matched: name=yarn != expected=rm"
>   }
> }. Sleeping for 20 sec(s)
> 2016-04-06 17:53:18,717 - Verifying if DFS directory '/ats/done/' exists.
> 2016-04-06 17:53:18,718 - call['ambari-sudo.sh su yarn -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : '"'"'http://c6401.ambari.apache.org:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=yarn'"'"' 1>/tmp/tmpAfX3Cp 2>/tmp/tmpzb3MbZ''] {'logoutput': None, 'quiet': False}
> 2016-04-06 17:53:18,901 - call returned (0, '')
> 2016-04-06 17:53:18,912 - Will retry 5 time(s), caught exception: Execution of 'curl -sS -L -w '%{http_code}' -X GET --negotiate -u : 'http://c6401.ambari.apache.org:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=yarn'' returned status_code=403.
> {
>   "RemoteException": {
>     "exception": "SecurityException",
>     "javaClassName": "java.lang.SecurityException",
>     "message": "Failed to obtain user group information: java.io.IOException: Usernames not matched: name=yarn != expected=rm"
>   }
> }. Sleeping for 20 sec(s)
> 2016-04-06 17:53:38,933 - Verifying if DFS directory '/ats/done/' exists.
> 2016-04-06 17:53:38,935 - call['ambari-sudo.sh su yarn -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : '"'"'http://c6401.ambari.apache.org:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=yarn'"'"' 1>/tmp/tmplrXWp3 2>/tmp/tmpO6cD5t''] {'logoutput': None, 'quiet': False}
> 2016-04-06 17:53:39,027 - call returned (0, '')
> 2016-04-06 17:53:39,028 - Will retry 4 time(s), caught exception: Execution of 'curl -sS -L -w '%{http_code}' -X GET --negotiate -u : 'http://c6401.ambari.apache.org:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=yarn'' returned status_code=403.
> {
>   "RemoteException": {
>     "exception": "SecurityException",
>     "javaClassName": "java.lang.SecurityException",
>     "message": "Failed to obtain user group information: java.io.IOException: Usernames not matched: name=yarn != expected=rm"
>   }
> }. Sleeping for 20 sec(s)
> 2016-04-06 17:53:59,047 - Verifying if DFS directory '/ats/done/' exists.
> 2016-04-06 17:53:59,048 - call['ambari-sudo.sh su yarn -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : '"'"'http://c6401.ambari.apache.org:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=yarn'"'"' 1>/tmp/tmpxNBcDc 2>/tmp/tmpSCX8Wk''] {'logoutput': None, 'quiet': False}
> 2016-04-06 17:53:59,095 - call returned (0, '')
> 2016-04-06 17:53:59,096 - Will retry 3 time(s), caught exception: Execution of 'curl -sS -L -w '%{http_code}' -X GET --negotiate -u : 'http://c6401.ambari.apache.org:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=yarn'' returned status_code=403.
> {
>   "RemoteException": {
>     "exception": "SecurityException",
>     "javaClassName": "java.lang.SecurityException",
>     "message": "Failed to obtain user group information: java.io.IOException: Usernames not matched: name=yarn != expected=rm"
>   }
> }. Sleeping for 20 sec(s)
> 2016-04-06 17:54:19,116 - Verifying if DFS directory '/ats/done/' exists.
> 2016-04-06 17:54:19,117 - call['ambari-sudo.sh su yarn -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : '"'"'http://c6401.ambari.apache.org:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=yarn'"'"' 1>/tmp/tmplZ3LST 2>/tmp/tmpjtCElc''] {'logoutput': None, 'quiet': False}
> 2016-04-06 17:54:19,177 - call returned (0, '')
> 2016-04-06 17:54:19,178 - Will retry 2 time(s), caught exception: Execution of 'curl -sS -L -w '%{http_code}' -X GET --negotiate -u : 'http://c6401.ambari.apache.org:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=yarn'' returned status_code=403.
> {
>   "RemoteException": {
>     "exception": "SecurityException",
>     "javaClassName": "java.lang.SecurityException",
>     "message": "Failed to obtain user group information: java.io.IOException: Usernames not matched: name=yarn != expected=rm"
>   }
> }. Sleeping for 20 sec(s)
> 2016-04-06 17:54:39,198 - Verifying if DFS directory '/ats/done/' exists.
> 2016-04-06 17:54:39,199 - call['ambari-sudo.sh su yarn -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : '"'"'http://c6401.ambari.apache.org:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=yarn'"'"' 1>/tmp/tmpsZoCji 2>/tmp/tmpUvPE7K''] {'logoutput': None, 'quiet': False}
> 2016-04-06 17:54:39,273 - call returned (0, '')
> 2016-04-06 17:54:39,274 - Will retry 1 time(s), caught exception: Execution of 'curl -sS -L -w '%{http_code}' -X GET --negotiate -u : 'http://c6401.ambari.apache.org:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=yarn'' returned status_code=403.
> {
>   "RemoteException": {
>     "exception": "SecurityException",
>     "javaClassName": "java.lang.SecurityException",
>     "message": "Failed to obtain user group information: java.io.IOException: Usernames not matched: name=yarn != expected=rm"
>   }
> }. Sleeping for 20 sec(s)
> 2016-04-06 17:54:59,289 - Verifying if DFS directory '/ats/done/' exists.
> 2016-04-06 17:54:59,291 - call['ambari-sudo.sh su yarn -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET --negotiate -u : '"'"'http://c6401.ambari.apache.org:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=yarn'"'"' 1>/tmp/tmpzuHWuQ 2>/tmp/tmpobAQNa''] {'logoutput': None, 'quiet': False}
> 2016-04-06 17:54:59,396 - call returned (0, '')
> {code}
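
For reference, the stdout log shows the agent obtaining a ticket for the RM service principal (rm/c6401.ambari.apache.org@EXAMPLE.COM) and then issuing the WebHDFS GETFILESTATUS check with user.name=yarn appended, which the NameNode rejects with 403 ("Usernames not matched: name=yarn != expected=rm"). Below is a minimal, self-contained sketch of that conflict. It is not the contents of patch.1: the hostname, keytab path and principal are copied from the log, while the helper function and the variant without user.name are illustrative assumptions.

{code}
#!/usr/bin/env python
# Minimal sketch of the failing WebHDFS check (NOT patch.1). It mirrors the
# curl calls in the log above: kinit as the RM service principal ("rm"),
# then GETFILESTATUS with a conflicting user.name=yarn.
# Hostname, keytab path and principal are taken from the log; the helper
# name and the "no user.name" variant are illustrative assumptions only.
import subprocess

NN_HTTP = "http://c6401.ambari.apache.org:50070"


def webhdfs_getfilestatus(path, user_name=None):
    # Build the same curl command the agent logs; SPNEGO (--negotiate -u :)
    # authenticates as whatever principal the ticket cache currently holds.
    url = "%s/webhdfs/v1%s?op=GETFILESTATUS" % (NN_HTTP, path)
    if user_name:
        # Conflicts with the SPNEGO-derived caller ("rm") -> HTTP 403,
        # "Usernames not matched: name=yarn != expected=rm".
        url += "&user.name=%s" % user_name
    cmd = ["curl", "-sS", "-L", "-w", "%{http_code}",
           "-X", "GET", "--negotiate", "-u", ":", url]
    return subprocess.check_output(cmd)


if __name__ == "__main__":
    # Obtain a ticket for the RM service principal, as the failing run does
    # (there the kinit is executed as the yarn user).
    subprocess.check_call(
        ["kinit", "-kt", "/etc/security/keytabs/rm.service.keytab",
         "rm/c6401.ambari.apache.org@EXAMPLE.COM"])

    # Reproduces the 403: authenticated as "rm" but asking to act as "yarn".
    print(webhdfs_getfilestatus("/ats/done/", user_name="yarn"))

    # Without the conflicting user.name the NameNode resolves the caller
    # from the Kerberos ticket, so the status check goes through instead of
    # failing with SecurityException.
    print(webhdfs_getfilestatus("/ats/done/"))
{code}

Run on the failing host, the first call should reproduce the 403 SecurityException from the log, while the second should return a normal FileStatus response (or a 404 if /ats/done/ does not exist yet).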



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
