ambari-dev mailing list archives

From "Jeremie Gomez (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (AMBARI-10519) Ambari 2.0 stack upgrade HDP 2.2.0.0 => 2.2.4.0 breaks on HDFS HA JournalNode rollEdits: "Access denied for user jn. Superuser privilege is required"
Date Wed, 29 Apr 2015 13:08:07 GMT

    [ https://issues.apache.org/jira/browse/AMBARI-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14519324#comment-14519324 ]

Jeremie Gomez commented on AMBARI-10519:
----------------------------------------

Same problem here. Executing hdfs dfsadmin -rollEdits clearly requires superuser privileges, so the script should kinit with the hdfs keytab rather than the jn keytab.
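In other words, the post-upgrade check should authenticate as the HDFS superuser before rolling the edit log. A minimal sketch of the intended sequence, modeled on the Execute calls in the quoted log below; the params attribute names (hdfs_user_keytab, hdfs_principal_name, hdfs_user) are assumptions that mirror the hadoop-env properties mentioned in the workaround:

{code}
# Sketch only: authenticate as the HDFS superuser, then roll the edit log.
from resource_management.core.resources.system import Execute

import params  # Ambari stack params module; the hdfs_* attribute names are assumed here

# kinit with the HDFS (superuser) keytab instead of the JournalNode keytab
Execute("/usr/bin/kinit -kt {0} {1};".format(params.hdfs_user_keytab,
                                             params.hdfs_principal_name),
        user=params.hdfs_user)

# rollEdits now runs with superuser credentials and should succeed
Execute("hdfs dfsadmin -rollEdits", user=params.hdfs_user, tries=1)
{code}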

A solution is to modify the script /var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/params.py on each host where ambari-agent is installed:
- line 246: change the _jn_principal_name conf string to /configurations/hadoop-env/hdfs_principal_name
- line 249: change the _jn_keytab conf string to /configurations/hadoop-env/hdfs_user_keytab

This solves the problem for me; an illustrative sketch of the change is below.
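For illustration, the edited lookups in params.py might look roughly like this. The use of default() and the surrounding code are assumptions based on the usual Ambari params.py pattern; only the variable names and the target config paths come from the workaround above:

{code}
# Sketch of the proposed params.py change (surrounding code assumed, not copied from the file).
from resource_management.libraries.functions.default import default

# around line 246: read the HDFS superuser principal instead of the JournalNode one
_jn_principal_name = default('/configurations/hadoop-env/hdfs_principal_name', None)

# around line 249: read the HDFS user keytab instead of the JournalNode keytab
_jn_keytab = default('/configurations/hadoop-env/hdfs_user_keytab', None)
{code}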

> Ambari 2.0 stack upgrade HDP 2.2.0.0 => 2.2.4.0 breaks on HDFS HA JournalNode rollEdits: "Access denied for user jn. Superuser privilege is required"
> -----------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: AMBARI-10519
>                 URL: https://issues.apache.org/jira/browse/AMBARI-10519
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server, stacks
>    Affects Versions: 2.0.0
>         Environment: HDP 2.2.0.0 => 2.2.4.0
>            Reporter: Hari Sekhon
>         Attachments: errors-5550.txt, output-5550.txt
>
>
> During upgrade of HDP stack 2.2.0.0 => 2.2.4.0 with Ambari 2.0 the procedure fails with the following error:
> {code}
> 2015-04-16 11:56:02,083 - Ensuring Journalnode quorum is established
> 2015-04-16 11:56:02,083 - u"Execute['/usr/bin/kinit -kt /etc/security/keytabs/jn.service.keytab jn/lonsl1101978-data-dr.uk.net.intra@LOCALDOMAIN;']" {'user': 'hdfs'}
> 2015-04-16 11:56:07,320 - u"Execute['hdfs dfsadmin -rollEdits']" {'tries': 1, 'user': 'hdfs'}
> 2015-04-16 11:56:13,198 - Error while executing command 'restart':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 214, in execute
>     method(env)
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 374, in restart
>     self.post_rolling_restart(env)
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/journalnode.py", line 72, in post_rolling_restart
>     journalnode_upgrade.post_upgrade_check()
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/journalnode_upgrade.py", line 42, in post_upgrade_check
>     hdfs_roll_edits()
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/journalnode_upgrade.py", line 83, in hdfs_roll_edits
>     Execute(command, user=params.hdfs_user, tries=1)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 274, in action_run
>     raise ex
> Fail: Execution of 'hdfs dfsadmin -rollEdits' returned 255. rollEdits: Access denied for user jn. Superuser privilege is required
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSuperuserPrivilege(FSPermissionChecker.java:109)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkSuperuserPrivilege(FSNamesystem.java:6484)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:6338)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEdits(NameNodeRpcServer.java:907)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rollEdits(ClientNamenodeProtocolServerSideTranslatorPB.java:741)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> 2015-04-16 11:56:13,291 - Command: /usr/bin/hdp-select status hadoop-hdfs-journalnode > /tmp/tmprZ57xv
> Output: hadoop-hdfs-journalnode - 2.2.4.0-2633
> {code}
> Hari Sekhon
> http://www.linkedin.com/in/harisekhon



