hadoop-common-issues mailing list archives

From "Yongjun Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-14104) Client should always ask namenode for kms provider path.
Date Fri, 03 Mar 2017 06:19:45 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15893802#comment-15893802 ]

Yongjun Zhang commented on HADOOP-14104:
----------------------------------------

Thanks [~andrew.wang], good comments.

Hi [~daryn],

I like the sound of your proposal too:
{quote}
 I think the cleanest/most-compatible way is leveraging the Credentials instead of the config.
We could inject a mapping of filesystem uri to kms uri via the secrets map. So now when the
client needs to talk to the kms it can check the map, else fallback to getServerDefaults.
{quote}
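If I understand correctly, the client-side lookup order would then be roughly like the sketch below. This is just my reading of the proposal; the {{kms-uri/}} alias prefix and the key provider accessor on {{FsServerDefaults}} are assumptions for illustration, not something already in trunk:
{code}
import java.net.URI;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;

public class KmsUriLookupSketch {
  // Sketch only: the "kms-uri/" alias prefix is made up for illustration.
  static URI resolveKmsUri(FileSystem fs, URI fsUri) throws Exception {
    Credentials creds = UserGroupInformation.getCurrentUser().getCredentials();
    // 1. Check the mapping injected into the secrets map, if any.
    byte[] injected = creds.getSecretKey(new Text("kms-uri/" + fsUri));
    if (injected != null) {
      return URI.create(new String(injected, StandardCharsets.UTF_8));
    }
    // 2. Otherwise fall back to asking the NN via getServerDefaults
    //    (assuming the key provider uri accessor this patch would add).
    String fromNN = fs.getServerDefaults(new Path("/")).getKeyProviderUri();
    return fromNN == null ? null : URI.create(fromNN);
  }
}
{code}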

Did you mean to use the following UserProvider method
{code}
  @Override
  public synchronized CredentialEntry createCredentialEntry(String name, char[] credential)
      throws IOException {
    Text nameT = new Text(name);
    if (credentials.getSecretKey(nameT) != null) {
      throw new IOException("Credential " + name + 
          " already exists in " + this);
    }
    credentials.addSecretKey(new Text(name), 
        new String(credential).getBytes("UTF-8"));
    return new CredentialEntry(name, credential);
  }
{code}
to add the <fs-uri, keyProvider> mapping to the credentials map? The mapping info for a
remote cluster would need to come from either the remote cluster's conf or the remote cluster's NN;
what's your thinking here?
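For instance, would the submission side inject the mapping directly via {{Credentials#addSecretKey}}, roughly like the sketch below? The alias prefix is again just made up, and {{remoteKmsUri}} would have been obtained from the remote conf or NN beforehand:
{code}
import java.net.URI;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.Credentials;

public class KmsUriInjectSketch {
  // Sketch only: add a <fs-uri, kms-uri> mapping to the secrets map,
  // e.g. at job submission time; the "kms-uri/" alias prefix is made up.
  static void addKmsUriMapping(Credentials creds, URI remoteFsUri, URI remoteKmsUri) {
    creds.addSecretKey(new Text("kms-uri/" + remoteFsUri),
        remoteKmsUri.toString().getBytes(StandardCharsets.UTF_8));
  }
}
{code}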

Would you please elaborate on this approach? Is there any incompatibility here?
 
Thanks.


> Client should always ask namenode for kms provider path.
> --------------------------------------------------------
>
>                 Key: HADOOP-14104
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14104
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: kms
>            Reporter: Rushabh S Shah
>            Assignee: Rushabh S Shah
>         Attachments: HADOOP-14104-trunk.patch, HADOOP-14104-trunk-v1.patch
>
>
> According to the current implementation of the kms provider in the client conf, there can be only one kms.
> In a multi-cluster environment, if a client is reading encrypted data from multiple clusters, it will only get the kms token for the local cluster.
> Not sure whether the target version is correct or not.




