ambari-dev mailing list archives

From "Hari Sekhon (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (AMBARI-10519) Ambari 2.0 stack upgrade HDP 2.2.0.0 => 2.2.4.0 breaks on HDFS HA JournalNode rollEdits: "Access denied for user jn. Superuser privilege is required"
Date Fri, 17 Apr 2015 15:36:00 GMT

    [ https://issues.apache.org/jira/browse/AMBARI-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14500051#comment-14500051 ]

Hari Sekhon commented on AMBARI-10519:
--------------------------------------

Yes, I did notice that originally, but looking again it seems Ambari is running the commands as the hdfs user while kinit'ing the jn Kerberos principal. What I'm not clear on is whether Ambari is making some assumption about jn that doesn't hold in my environment, or whether this is a mistake and the kinit should have used the hdfs principal instead.

Having checked again, there is no jn user either locally or in the LDAP directory for this cluster; jn exists only as a Kerberos service principal, and the org.apache.hadoop.hdfs.qjournal.server.JournalNode daemons are running as the hdfs user.
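
For reference, the checks were roughly along these lines (exact commands are illustrative; the keytab path is taken from the upgrade log below):
{code}
id jn                                                # "no such user" locally or via LDAP
klist -kt /etc/security/keytabs/jn.service.keytab    # only jn/_HOST@LOCALDOMAIN entries
ps -ef | grep [J]ournalNode                          # JournalNode JVMs owned by the hdfs user
{code}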

The Kerberos principal config was set by Ambari, and the principals were generated with a Perl script from the exported CSV:
{code}
dfs.journalnode.kerberos.principal = jn/_HOST@LOCALDOMAIN
{code}
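
For comparison, on a typical Ambari-managed secure cluster the JournalNode service principal is separate from the hdfs superuser (headless) principal, which is presumably what rollEdits needs; the names below are the usual defaults, not copied from this cluster's actual config:
{code}
dfs.journalnode.kerberos.principal = jn/_HOST@LOCALDOMAIN   # service principal only, no jn user
dfs.namenode.kerberos.principal    = nn/_HOST@LOCALDOMAIN
# hdfs headless/superuser principal, typically hdfs@LOCALDOMAIN (or hdfs-<cluster>@LOCALDOMAIN)
# with keytab /etc/security/keytabs/hdfs.headless.keytab
{code}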

What's also not clear is what the proper workaround should be, given that Ambari tries to fully automate this process. I'm not sure I can stop at that point (or rather let it fail), re-run the command as hdfs, and then retry so that Ambari ignores the failure and moves past it, as I did in AMBARI-10494 and AMBARI-10518. A rough sketch of that manual workaround is below.
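
If stopping at the failure is possible, the manual step would presumably be to re-authenticate as the hdfs superuser principal and roll the edits by hand before retrying the wizard (keytab path and principal name assumed to follow the usual Ambari defaults; adjust for the actual cluster):
{code}
# on the host where the JournalNode restart failed
su - hdfs -c 'kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs@LOCALDOMAIN'
su - hdfs -c 'hdfs dfsadmin -rollEdits'
# then retry/ignore the failed step in the Ambari upgrade wizard, as in AMBARI-10494 and AMBARI-10518
{code}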

> Ambari 2.0 stack upgrade HDP 2.2.0.0 => 2.2.4.0 breaks on HDFS HA JournalNode rollEdits: "Access denied for user jn. Superuser privilege is required"
> -----------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: AMBARI-10519
>                 URL: https://issues.apache.org/jira/browse/AMBARI-10519
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server, stacks
>    Affects Versions: 2.0.0
>         Environment: HDP 2.2.0.0 => 2.2.4.0
>            Reporter: Hari Sekhon
>            Priority: Blocker
>         Attachments: errors-5550.txt, output-5550.txt
>
>
> During upgrade of HDP stack 2.2.0.0 => 2.2.4.0 with Ambari 2.0 the procedure fails with the following error:
> {code}
> 2015-04-16 11:56:02,083 - Ensuring Journalnode quorum is established
> 2015-04-16 11:56:02,083 - u"Execute['/usr/bin/kinit -kt /etc/security/keytabs/jn.service.keytab jn/lonsl1101978-data-dr.uk.net.intra@LOCALDOMAIN;']" {'user': 'hdfs'}
> 2015-04-16 11:56:07,320 - u"Execute['hdfs dfsadmin -rollEdits']" {'tries': 1, 'user': 'hdfs'}
> 2015-04-16 11:56:13,198 - Error while executing command 'restart':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 214, in execute
>     method(env)
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 374, in restart
>     self.post_rolling_restart(env)
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/journalnode.py", line 72, in post_rolling_restart
>     journalnode_upgrade.post_upgrade_check()
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/journalnode_upgrade.py", line 42, in post_upgrade_check
>     hdfs_roll_edits()
>   File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/journalnode_upgrade.py", line 83, in hdfs_roll_edits
>     Execute(command, user=params.hdfs_user, tries=1)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 274, in action_run
>     raise ex
> Fail: Execution of 'hdfs dfsadmin -rollEdits' returned 255. rollEdits: Access denied for user jn. Superuser privilege is required
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSuperuserPrivilege(FSPermissionChecker.java:109)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkSuperuserPrivilege(FSNamesystem.java:6484)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:6338)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEdits(NameNodeRpcServer.java:907)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rollEdits(ClientNamenodeProtocolServerSideTranslatorPB.java:741)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> 2015-04-16 11:56:13,291 - Command: /usr/bin/hdp-select status hadoop-hdfs-journalnode
> /tmp/tmprZ57xv
> Output: hadoop-hdfs-journalnode - 2.2.4.0-2633
> {code}
> Hari Sekhon
> http://www.linkedin.com/in/harisekhon



