hadoop-common-issues mailing list archives

From "Yongjun Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-14104) Client should always ask namenode for kms provider path.
Date Thu, 02 Mar 2017 01:10:45 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891394#comment-15891394 ]

Yongjun Zhang commented on HADOOP-14104:

Hi [~shahrs87],

Thanks much for working on this issue!

I have a few nits and questions about the patch:

1. Since this is a public API, we should introduce a new constructor with the additional parameter
keyProviderUri to stay backward compatible, instead of just modifying the existing one:
public class FsServerDefaults implements Writable {
  public FsServerDefaults(long blockSize, int bytesPerChecksum,
      int writePacketSize, short replication, int fileBufferSize,
      boolean encryptDataTransfer, long trashInterval,
      DataChecksum.Type checksumType,
      String keyProviderUri) {
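A minimal sketch of the backward-compatible overload (field and parameter lists are abbreviated stand-ins, not the real FsServerDefaults): the existing constructor is kept and delegates to the new one with a default key provider URI.

```java
// Sketch only: the real class has many more fields; the point is the
// delegation pattern that preserves the old public constructor.
class FsServerDefaultsSketch {
  private final long blockSize;
  private final String keyProviderUri;

  // Existing constructor, preserved so callers compiled against the
  // old API keep working.
  FsServerDefaultsSketch(long blockSize) {
    this(blockSize, "");
  }

  // New constructor carrying the additional keyProviderUri parameter.
  FsServerDefaultsSketch(long blockSize, String keyProviderUri) {
    this.blockSize = blockSize;
    this.keyProviderUri = keyProviderUri;
  }

  String getKeyProviderUri() {
    return keyProviderUri;
  }

  long getBlockSize() {
    return blockSize;
  }
}
```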

2. Suggest adding a KEY_PROVIDER_URI_DEFAULT constant to replace the "" here (in both FtpConfigKeys.java
and LocalConfigKeys.java):
  protected static FsServerDefaults getServerDefaults() throws IOException {
    return new FsServerDefaults(
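A sketch of the suggested constant (class and method names here are stand-ins for FtpConfigKeys/LocalConfigKeys): the named constant replaces the bare "" literal passed into FsServerDefaults.

```java
// Sketch only: mirrors the *_DEFAULT constant convention already used
// in FtpConfigKeys and LocalConfigKeys.
class ConfigKeysSketch {
  static final String KEY_PROVIDER_URI_DEFAULT = "";

  static String serverDefaultsKeyProviderUri() {
    // Previously a bare "" literal; the constant names the intent.
    return KEY_PROVIDER_URI_DEFAULT;
  }
}
```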

3. The following method swallows IOException and returns false. Suggest removing the {{@throws
IOException}} tag, adding a comment in the catch block explaining why the exception can be ignored
and false returned here, and adding a space after the word {{catch}}.
  /**
   * Probe for encryption enabled on this filesystem.
   * See {@link DFSUtilClient#isHDFSEncryptionEnabled(Configuration)}
   * @return true if encryption is enabled
   * @throws IOException
   */
  public boolean isHDFSEncryptionEnabled() {
    try {
      return DFSUtilClient.isHDFSEncryptionEnabled(getKeyProviderUri());
    } catch(IOException ioe) {
      return false;
    }
  }
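The method after the suggested edits might look like the sketch below; the helper stands in for the real DFSUtilClient call, and its behavior is an assumption for illustration.

```java
import java.io.IOException;

// Sketch of the revised probe: no {{@throws}} tag, a space after
// "catch", and a comment justifying the swallowed exception.
class EncryptionProbeSketch {
  // Hypothetical stand-in for DFSUtilClient.isHDFSEncryptionEnabled.
  static boolean resolveProviderUri(String uri) throws IOException {
    if (uri == null) {
      throw new IOException("no key provider configured");
    }
    return !uri.isEmpty();
  }

  /**
   * Probe for encryption enabled on this filesystem.
   * @return true if encryption is enabled
   */
  static boolean isHDFSEncryptionEnabled(String uri) {
    try {
      return resolveProviderUri(uri);
    } catch (IOException ioe) {
      // If the provider URI cannot be resolved, encryption cannot be
      // in use, so treating the failure as "not enabled" is safe here.
      return false;
    }
  }
}
```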

4. Line 84 of KeyProviderCache.java (not introduced by your change):
LOG.error("Could not create KeyProvider for DFSClient !!", e.getCause());
Suggest replacing e.getCause() with e, so we can see the full stack trace.
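The reason to log the outer exception: it carries both its own stack and the cause chain, while logging only the cause drops the wrapper's frames. A small illustration (names here are illustrative, not the real KeyProviderCache code):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Renders a throwable's full stack trace, including any "Caused by:"
// chain, the same text a logger's error(msg, t) call would emit.
class LogCauseSketch {
  static String stackOf(Throwable t) {
    StringWriter sw = new StringWriter();
    t.printStackTrace(new PrintWriter(sw, true));
    return sw.toString();
  }
}
```

Logging the wrapper keeps "wrapper" and "root" visible; logging only the cause loses the wrapper entirely.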

5. Currently {{getServerDefaults()}} contacts the NN every hour to find out whether the key provider
has been updated. If the key provider changes within that hour, client code may run into an
exception; I wonder whether we have a mechanism to handle the exception, update the key provider,
and try again?
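One possible shape for the refresh-and-retry this question asks about (purely hypothetical, not existing Hadoop code): on failure, force a re-fetch of the provider URI and retry the operation once before giving up.

```java
import java.io.IOException;
import java.util.function.Supplier;

// Hypothetical sketch: retry once with a freshly fetched provider URI
// when the first attempt fails with an IOException.
class RetrySketch {
  interface Op<T> {
    T run(String providerUri) throws IOException;
  }

  static <T> T withProviderRetry(Supplier<String> fetchProviderUri, Op<T> op)
      throws IOException {
    try {
      return op.run(fetchProviderUri.get());
    } catch (IOException first) {
      // The provider may have changed since the cached server defaults
      // were read; refresh the URI and retry once.
      return op.run(fetchProviderUri.get());
    }
  }
}
```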

> Client should always ask namenode for kms provider path.
> --------------------------------------------------------
>                 Key: HADOOP-14104
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14104
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: kms
>            Reporter: Rushabh S Shah
>            Assignee: Rushabh S Shah
>         Attachments: HADOOP-14104-trunk.patch, HADOOP-14104-trunk-v1.patch
> According to the current implementation of the kms provider in the client conf, there can only be
one kms.
> In a multi-cluster environment, if a client is reading encrypted data from multiple clusters,
it will only get the kms token for the local cluster.
> Not sure whether the target version is correct or not.

This message was sent by Atlassian JIRA
