hadoop-hdfs-issues mailing list archives

From "Jeff W (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-11500) Java Unlimited Cryptography Extension doesn't work
Date Sun, 05 Mar 2017 22:25:32 GMT

     [ https://issues.apache.org/jira/browse/HDFS-11500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeff W updated HDFS-11500:
--------------------------
    Description: 
I'm setting up my HDFS cluster with the QJM (Quorum Journal Manager) service and have enabled Kerberos authentication. I replaced the JCE policy files with the unlimited-strength extension, but I still get the following error from the JournalNode:
-----------------
    2017-03-04 15:31:39,566 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for nn/hdfs-nn1.cloud.local@CLOUD.LOCAL (auth:KERBEROS)
    2017-03-04 15:31:39,585 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for nn/hdfs-nn1.cloud.local@CLOUD.LOCAL (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol
    ...
    2017-03-04 15:31:40,345 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: Authentication exception: GSSException: Failure unspecified at GSS-API level (Mechanism level: Invalid argument (400) - Cannot find key of appropriate type to decrypt AP REP - AES256 CTS mode with HMAC SHA1-96)
----------------
 
IPC authentication for the NameNode looks good, but the AuthenticationFilter exception that follows comes from HTTP authentication, since the same error message appears in the NameNode's log:
-----------------
    2017-03-04 15:31:40,402 ERROR org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: caught exception initializing https://jn1.cloud.local:8481/getJournal?jid=hadoop1&segmentTxId=1&storageInfo=-63%3A1598677718%3A0%3ACID-2eecd392-dae7-480b-a867-5e0295c78648
    java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: Authentication failed, status: 403, message: GSSException: Failure unspecified at GSS-API level (Mechanism level: Invalid argument (400) - Cannot find key of appropriate type to decrypt AP REP - AES256 CTS mode with HMAC SHA1-96)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:464)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:456)
        at java.security.AccessController.doPrivileged(Native Method)
    ...
    Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: Authentication failed, status: 403, message: GSSException: Failure unspecified at GSS-API level (Mechanism level: Invalid argument (400) - Cannot find key of appropriate type to decrypt AP REP - AES256 CTS mode with HMAC SHA1-96)
        at org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:274)
        at org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:77)
        at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:214)
        at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:215)
        at org.apache.hadoop.hdfs.web.URLConnectionFactory.openConnection(URLConnectionFactory.java:161)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:461)
        ... 30 more
    2017-03-04 20:06:51,612 ERROR org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Got error reading edit log input stream https://jn1.cloud.local:8481/getJournal?jid=hadoop1&segmentTxId=1&storageInfo=-63%3A1598677718%3A0%3ACID-2eecd392-dae7-480b-a867-5e0295c78648; failing over to edit log https://jn2.cloud.local:8481/getJournal?jid=hadoop1&segmentTxId=1&storageInfo=-63%3A1598677718%3A0%3ACID-2eecd392-dae7-480b-a867-5e0295c78648
    org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream$PrematureEOFException: got premature end-of-file at txid 0; expected file to go up to 2
        at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:194)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
        at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
        ....
----------------
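
The failing call is plain Kerberos-over-HTTP, so it can be reproduced in isolation with hadoop-auth's client classes against the same getJournal URL. A minimal sketch (the class name is just for illustration; it assumes a fresh kinit as hdfs@CLOUD.LOCAL and that the JVM trusts the JournalNode's TLS certificate):
-----------------
    import java.net.HttpURLConnection;
    import java.net.URL;
    import org.apache.hadoop.security.authentication.client.AuthenticatedURL;

    public class SpnegoProbe {
        public static void main(String[] args) throws Exception {
            // Same endpoint the NameNode fails on.
            URL url = new URL("https://jn1.cloud.local:8481/getJournal?jid=hadoop1"
                    + "&segmentTxId=1&storageInfo=-63%3A1598677718%3A0%3A"
                    + "CID-2eecd392-dae7-480b-a867-5e0295c78648");
            AuthenticatedURL.Token token = new AuthenticatedURL.Token();
            // The default authenticator is KerberosAuthenticator, the same code
            // path as the stack trace above.
            HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }
----------------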
    
AES256 keys are present for the principals on all machines involved:
-----------------
    $ klist -kte hdfs.keytab | grep 'aes256'
    5 12/31/1969 19:00:00 jn/jn1.cloud.local@CLOUD.LOCAL (aes256-cts-hmac-sha1-96)
    4 12/31/1969 19:00:00 jn/jn2.cloud.local@CLOUD.LOCAL (aes256-cts-hmac-sha1-96)
    4 12/31/1969 19:00:00 jn/jn3.cloud.local@CLOUD.LOCAL (aes256-cts-hmac-sha1-96)
    4 12/31/1969 19:00:00 nn/hdfs-nn1.cloud.local@CLOUD.LOCAL (aes256-cts-hmac-sha1-96)
    4 12/31/1969 19:00:00 nn/hdfs-nn2.cloud.local@CLOUD.LOCAL (aes256-cts-hmac-sha1-96)
    3 12/31/1969 19:00:00 dn/hdfs-dn1.cloud.local@CLOUD.LOCAL (aes256-cts-hmac-sha1-96)
    3 12/31/1969 19:00:00 dn/hdfs-dn2.cloud.local@CLOUD.LOCAL (aes256-cts-hmac-sha1-96)
    3 12/31/1969 19:00:00 dn/hdfs-dn3.cloud.local@CLOUD.LOCAL (aes256-cts-hmac-sha1-96)
    48 12/31/1969 19:00:00 hdfs@CLOUD.LOCAL (aes256-cts-hmac-sha1-96)
----------------
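
The same keys can also be inspected from a JVM through the public KeyTab API, which confirms the enctypes and key version numbers the keytab carries for a given principal. A minimal sketch (illustrative class name; one principal picked from the list above):
-----------------
    import java.io.File;
    import javax.security.auth.kerberos.KerberosKey;
    import javax.security.auth.kerberos.KerberosPrincipal;
    import javax.security.auth.kerberos.KeyTab;

    public class KeytabKeyCheck {
        public static void main(String[] args) {
            KeyTab kt = KeyTab.getInstance(
                    new File("/opt/hdfs/default/etc/hadoop/hdfs.keytab"));
            KerberosPrincipal princ =
                    new KerberosPrincipal("jn/jn1.cloud.local@CLOUD.LOCAL");
            for (KerberosKey key : kt.getKeys(princ)) {
                // etype 18 = aes256-cts-hmac-sha1-96; the kvno here has to match
                // the KDC's current kvno for the principal, or AP-REQ/AP-REP
                // decryption fails.
                System.out.println("etype=" + key.getKeyType()
                        + " kvno=" + key.getVersionNumber());
            }
        }
    }
----------------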
    
    
Keytab export commands:
-----------------
    > ktpass /princ jn/jn1.cloud.local@CLOUD.LOCAL /mapuser jn1@cloud.local /rndpass /crypto all /ptype KRB5_NT_PRINCIPAL /mapop set /out hdfs.keytab /in hdfs.keytab
    ...
    > ktpass /princ nn/hdfs-nn1.cloud.local@CLOUD.LOCAL /mapuser nn1@cloud.local /rndpass /crypto all /ptype KRB5_NT_PRINCIPAL /mapop set /out hdfs.keytab /in hdfs.keytab
    ...
    > ktpass /princ dn/hdfs-dn1.cloud.local@CLOUD.LOCAL /mapuser dn1@cloud.local /rndpass /crypto all /ptype KRB5_NT_PRINCIPAL /mapop set /out hdfs.keytab /in hdfs.keytab
    ...
    > ktpass /princ hdfs@CLOUD.LOCAL /mapuser hdfs@cloud.local /rndpass /crypto all /ptype KRB5_NT_PRINCIPAL /mapop set /out hdfs.keytab /in hdfs.keytab
----------------
    
    
JCE information:
-----------------
    Max allowed key length for AES: 2147483647
    SUN 1.8
    SunRsaSign 1.8
    SunEC 1.8
    SunJSSE 1.8
    SunJCE 1.8
    SunJGSS 1.8
    SunSASL 1.8
    XMLDSig 1.8
    SunPCSC 1.8
----------------
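
For reference, the listing above has the shape produced by a few lines of standard JDK code; a minimal sketch of what presumably generated it (illustrative class name):
-----------------
    import java.security.Provider;
    import java.security.Security;
    import javax.crypto.Cipher;

    public class JceCheck {
        public static void main(String[] args) throws Exception {
            // 2147483647 (Integer.MAX_VALUE) indicates the unlimited-strength
            // policy is active.
            System.out.println("Max allowed key length for AES: "
                    + Cipher.getMaxAllowedKeyLength("AES"));
            for (Provider p : Security.getProviders()) {
                System.out.println(p.getName() + " " + p.getVersion());
            }
        }
    }
----------------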
    
    
Configuration for core-site.xml:
-----------------
    <property>
        <name>hadoop.security.authentication</name>
        <value>kerberos</value>
    </property>

    <property>
        <name>hadoop.security.authorization</name>
        <value>true</value>
    </property>
----------------
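
With hadoop.security.authentication set to kerberos, the daemons log in through UserGroupInformation, and that keytab login can be exercised outside the daemons as well; a minimal sketch (illustrative class name; assumes core-site.xml is on the classpath):
-----------------
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class UgiLoginCheck {
        public static void main(String[] args) throws Exception {
            // Picks up hadoop.security.authentication=kerberos from core-site.xml
            // and switches UGI to Kerberos mode.
            Configuration conf = new Configuration();
            UserGroupInformation.setConfiguration(conf);
            UserGroupInformation.loginUserFromKeytab(
                    "hdfs@CLOUD.LOCAL", "/opt/hdfs/default/etc/hadoop/hdfs.keytab");
            System.out.println("Login user: " + UserGroupInformation.getLoginUser());
        }
    }
----------------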
    
    
And hdfs-site.xml:
-----------------
    <property>
        <name>dfs.journalnode.keytab.file</name>
        <value>/opt/hdfs/default/etc/hadoop/hdfs.keytab</value>
    </property>

    <property>
        <name>dfs.journalnode.kerberos.principal</name>
        <value>jn/_HOST@CLOUD.LOCAL</value>
    </property>
    ...
    <property>
        <description>Path to the HDFS keytab.</description>
        <name>dfs.namenode.keytab.file</name>
        <value>/opt/hdfs/default/etc/hadoop/hdfs.keytab</value>
    </property>

    <property>
        <description>Kerberos principal name for the NameNode.</description>
        <name>dfs.namenode.kerberos.principal</name>
        <value>nn/_HOST@CLOUD.LOCAL</value>
    </property>
----------------
    
    
JAAS configuration:
-----------------
    Client {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        keyTab="/opt/hdfs/default/etc/hadoop/hdfs.keytab"
        storeKey=true
        useTicketCache=false
        principal="hdfs@CLOUD.LOCAL";
    };
----------------
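
This JAAS entry can be smoke-tested on its own with a plain LoginContext; a minimal sketch (illustrative class name; assumes the file above is passed via -Djava.security.auth.login.config, and -Dsun.security.krb5.debug=true additionally prints the enctype negotiation):
-----------------
    import javax.security.auth.login.LoginContext;

    public class JaasKeytabLogin {
        public static void main(String[] args) throws Exception {
            // "Client" matches the entry name in the JAAS configuration above.
            LoginContext lc = new LoginContext("Client");
            lc.login();  // acquires a TGT for hdfs@CLOUD.LOCAL from the keytab
            System.out.println("Logged in as: " + lc.getSubject().getPrincipals());
        }
    }
----------------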
    
    
The krb5.conf file is below:
-----------------
    $ cat /etc/krb5.conf
    [logging]
    default = FILE:/var/log/krb5libs.log
    kdc = FILE:/var/log/krb5kdc.log
    admin_server = FILE:/var/log/kadmind.log

    [libdefaults]
    dns_lookup_realm = false
    ticket_lifetime = 24h
    renew_lifetime = 7d
    forwardable = true
    rdns = false
    default_realm = CLOUD.LOCAL
    # default_ccache_name = KEYRING:persistent:%{uid}

    default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96 rc4-hmac
    default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96 rc4-hmac
    permitted_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96 rc4-hmac

    [realms]
    CLOUD.LOCAL = {
        kdc = dc.cloud.local
        admin_server = dc.cloud.local
    }

    [domain_realm]
    .cloud.local = CLOUD.LOCAL
    cloud.local = CLOUD.LOCAL
----------------
    
    
I'm using AD as the Kerberos server. "Use Kerberos DES encryption types for this account" is checked for all the accounts. The current logged-on user:
------------------
    $ klist
    Ticket cache: KEYRING:persistent:1179001683:krb_ccache_l7eifu1
    Default principal: hdfs@CLOUD.LOCAL
    
    Valid starting       Expires              Service principal
    03/04/2017 17:41:13  03/05/2017 03:41:13  krbtgt/CLOUD.LOCAL@CLOUD.LOCAL
        renew until 03/11/2017 17:41:13
----------------

    
    
Is there anything else I should pay attention to?


> Java Unlimited Cryptography Extension doesn't work
> --------------------------------------------------
>
>                 Key: HDFS-11500
>                 URL: https://issues.apache.org/jira/browse/HDFS-11500
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: journal-node, namenode
>    Affects Versions: 2.7.3
>         Environment: Centos 7, Java 1.8, HDFS HA
>            Reporter: Jeff W
>              Labels: security
>


