hadoop-hdfs-issues mailing list archives

From "Eric Yang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-14375) DataNode cannot serve BlockPool to multiple NameNodes in the different realm
Date Thu, 15 Aug 2019 18:29:00 GMT

[ https://issues.apache.org/jira/browse/HDFS-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908389#comment-16908389 ]

Eric Yang commented on HDFS-14375:
----------------------------------

[~Jihyun.Cho] The first log line indicates that the IPC Server authenticated dn/testhost1.com@TEST1.COM
to access a DataNode running as dn/testhost1.com@TEST2.COM.

The problem is the second log line, from ServiceAuthorizationManager.  It looks like an incorrect
optimization made a long time ago [on this line|https://github.com/apache/hadoop/blame/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ServiceAuthorizationManager.java#L120].
The original code compared the [short username|https://github.com/apache/hadoop/commit/c3fdd289cf26fa3bb9c0d2d9f906eba769ddd789#diff-90193e5349be2122d5ed915ba38c957dL123].

The original code ensured that dn/testhost1.com@TEST1.COM and dn/testhost2.com@TEST2.COM could both
map to the same user through auth_to_local rules.  The current implementation compares the raw
principals, which skips the auth_to_local mapping and incorrectly fails authorization.
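
To illustrate (a minimal sketch, not the actual ServiceAuthorizationManager code): with auth_to_local rules applied, both service principals resolve to the same short name, while a raw string comparison fails across realms. The rule below is an assumption standing in for the cluster's real hadoop.security.auth_to_local setting.

{code:java}
import org.apache.hadoop.security.authentication.util.KerberosName;

public class ShortNameCheck {
  public static void main(String[] args) throws Exception {
    // Assumed auth_to_local rules: map any dn/<host>@<realm> principal to
    // the local user "dn"; a real cluster reads these from
    // hadoop.security.auth_to_local.
    KerberosName.setRules("RULE:[2:$1@$0](dn@.*)s/.*/dn/\nDEFAULT");

    String caller   = "dn/testhost1.com@TEST1.COM";
    String expected = "dn/testhost1.com@TEST2.COM";

    // Raw principal comparison (current code path): fails across realms.
    System.out.println(caller.equals(expected));   // false

    // Short-name comparison (original behavior): both map to "dn".
    System.out.println(new KerberosName(caller).getShortName()
        .equals(new KerberosName(expected).getShortName()));   // true
  }
}
{code}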

> DataNode cannot serve BlockPool to multiple NameNodes in the different realm
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-14375
>                 URL: https://issues.apache.org/jira/browse/HDFS-14375
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: security
>    Affects Versions: 3.1.1
>            Reporter: Jihyun Cho
>            Assignee: Jihyun Cho
>            Priority: Major
>         Attachments: authorize.patch
>
>
> Let me describe the environment.
> {noformat}
> KDC(TEST1.COM) <-- Cross-realm trust -->  KDC(TEST2.COM)
>    |                                         |
> NameNode1                                 NameNode2
>    |                                         |
>    ---------- DataNodes (federated) ----------
> {noformat}
> We configured the secure clusters and federated them.
> * Principal
> ** NameNode1 : nn/_HOST@TEST1.COM 
> ** NameNode2 : nn/_HOST@TEST2.COM 
> ** DataNodes : dn/_HOST@TEST2.COM 
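> A sketch of the corresponding configuration (standard HDFS keys; values assume the principals above):
> {noformat}
> <!-- hdfs-site.xml on every DataNode -->
> <property>
>   <name>dfs.datanode.kerberos.principal</name>
>   <value>dn/_HOST@TEST2.COM</value>
> </property>
> <!-- hdfs-site.xml on NameNode1; NameNode2 uses nn/_HOST@TEST2.COM -->
> <property>
>   <name>dfs.namenode.kerberos.principal</name>
>   <value>nn/_HOST@TEST1.COM</value>
> </property>
> {noformat}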
> But the DataNodes could not connect to NameNode1, failing with the error below.
> {noformat}
> WARN SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization failed for dn/hadoop-datanode.test.com@TEST2.COM (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only accessible by dn/hadoop-datanode.test.com@TEST1.COM
> {noformat}
> We have avoided the error with the attached patch.
> The patch compares only the {{username}} and {{hostname}} and ignores the {{realm}}, as sketched below.
> I think this is safe: if the realms are different and no cross-realm trust is configured, the two
> sides cannot communicate with each other anyway. If you are worried about this, please let me know.
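> A rough sketch of that comparison (hypothetical helper name; the actual change is in the attached authorize.patch):
> {noformat}
> // Treat "dn/testhost1.com@TEST1.COM" and "dn/testhost1.com@TEST2.COM"
> // as the same identity by comparing only the user/host part.
> static boolean sameUserAndHost(String a, String b) {
>   String ua = a.contains("@") ? a.substring(0, a.indexOf('@')) : a;
>   String ub = b.contains("@") ? b.substring(0, b.indexOf('@')) : b;
>   return ua.equals(ub);
> }
> {noformat}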
> In the long run, it would be better if I could configure multiple trusted realms for authorization, like this:
> {noformat}
> <property>
>   <name>dfs.namenode.kerberos.trust-realms</name>
>   <value>TEST1.COM,TEST2.COM</value>
> </property>
> {noformat}
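> The authorizer could then accept any realm in the list (key name as proposed above; usage hypothetical):
> {noformat}
> Collection<String> trusted =
>     conf.getTrimmedStringCollection("dfs.namenode.kerberos.trust-realms");
> boolean realmTrusted =
>     trusted.contains(new KerberosName(userPrincipal).getRealm());
> {noformat}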


