hadoop-hdfs-issues mailing list archives

From "Manoj Govindassamy (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HDFS-12357) Let NameNode to bypass external attribute provider for special user
Date Wed, 06 Sep 2017 20:11:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16155925#comment-16155925 ]

Manoj Govindassamy edited comment on HDFS-12357 at 9/6/17 8:10 PM:
-------------------------------------------------------------------

Thanks for working on the patch revision, [~yzhangal]. Overall it looks good to me, +1. A few nits below:

1. {{FSDirectory.java#initUsersToBypassExtProvider}}
{noformat}
    List<String> bpUserList = new ArrayList<String>();
    for(int i = 0; i < bypassUsers.length; i++) {
      String tmp = bypassUsers[i].trim();
      if (!tmp.isEmpty()) {
        bpUserList.add(tmp);
      }
    }
    if (bpUserList.size() > 0) {
      usersToBypassExtAttrProvider = new HashSet<String>();
      for(String user : bpUserList) {
        LOG.info("Add user " + user + " to the list that will bypass external"
            + " attribute provider.");
        usersToBypassExtAttrProvider.add(user.trim());
      }
    }
{noformat}

The above two {{for}} loops can be simplified into a single loop: trimming, skipping empty entries, and adding to _usersToBypassExtAttrProvider_ can all be done in the same block, as in the sketch below.
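
For illustration, a minimal sketch of the combined loop (assuming the same {{bypassUsers}}, {{usersToBypassExtAttrProvider}} and {{LOG}} names as in the patch) might look like:
{noformat}
    // Sketch only: trim, skip empty entries, and populate the set in one pass.
    for (String user : bypassUsers) {
      String trimmed = user.trim();
      if (trimmed.isEmpty()) {
        continue;
      }
      if (usersToBypassExtAttrProvider == null) {
        usersToBypassExtAttrProvider = new HashSet<String>();
      }
      LOG.info("Add user " + trimmed + " to the list that will bypass external"
          + " attribute provider.");
      usersToBypassExtAttrProvider.add(trimmed);
    }
{noformat}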

2. {{TestINodeAttributeProvider}}
{noformat}
    String[] bypassUsers = {"u2", "u3"};
{noformat}
Can we please also cover "u4" in this test? Since this user is not in the bypass list, its
getFileStatus result is going to differ from that of the bypass users, so adding this
non-bypass user will make the test complete. See the sketch below.
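
A hypothetical shape for that extra coverage, just to illustrate the intent (the array names below are illustrative and not taken from the actual test code):
{noformat}
    // Users expected to bypass the external attribute provider.
    String[] bypassUsers = {"u2", "u3"};
    // A user deliberately left out of the bypass list; getFileStatus for this
    // user should still reflect the external provider's attributes.
    String[] nonBypassUsers = {"u4"};
{noformat}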


was (Author: manojg):
Thanks for working on the patch revision, [~yzhangal]. Overall it looks good to me; +1, pending
the following nits.

1. {{FSDirectory.java#initUsersToBypassExtProvider}}
{noformat}
    List<String> bpUserList = new ArrayList<String>();
    for(int i = 0; i < bypassUsers.length; i++) {
      String tmp = bypassUsers[i].trim();
      if (!tmp.isEmpty()) {
        bpUserList.add(tmp);
      }
    }
    if (bpUserList.size() > 0) {
      usersToBypassExtAttrProvider = new HashSet<String>();
      for(String user : bpUserList) {
        LOG.info("Add user " + user + " to the list that will bypass external"
            + " attribute provider.");
        usersToBypassExtAttrProvider.add(user.trim());
      }
    }
{noformat}

The above two {{for}} loops can be simplified into a single loop: trimming, skipping empty entries, and adding to _usersToBypassExtAttrProvider_ can all be done in the same block.

2. {{TestINodeAttributeProvider}}
{noformat}
    String[] bypassUsers = {"u2", "u3"};
{noformat}
Can we please also cover "u4" in this test? Since this user is not in the bypass list, its
getFileStatus result is going to differ from that of the bypass users, so adding this
non-bypass user will make the test complete.

> Let NameNode to bypass external attribute provider for special user
> -------------------------------------------------------------------
>
>                 Key: HDFS-12357
>                 URL: https://issues.apache.org/jira/browse/HDFS-12357
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>         Attachments: HDFS-12357.001a.patch, HDFS-12357.001b.patch, HDFS-12357.001.patch, HDFS-12357.002.patch, HDFS-12357.003.patch, HDFS-12357.004.patch, HDFS-12357.005.patch, HDFS-12357.006.patch
>
>
> This is a third proposal to solve the problem described in HDFS-12202.
> The problem is, when we do distcp from one cluster to another (or within the same cluster), in addition to copying file data, we copy the metadata from source to target. If external attribute provider is enabled, the metadata may be read from the provider, thus provider data read from source may be saved to target HDFS.
> We want to avoid saving metadata from external provider to HDFS, so we want to bypass external provider when doing the distcp (or hadoop fs -cp) operation.
> Two alternative approaches were proposed earlier, one in HDFS-12202, the other in HDFS-12294. The proposal here is the third one.
> The idea is, we introduce a new config, that specifies a special user (or a list of users), and let NN bypass external provider when the current user is a special user.
> If we run applications as the special user that need data from external attribute provider, then it won't work. So the constraint on this approach is, the special users here should not run applications that need data from external provider.
> Thanks [~asuresh] for proposing this idea and [~chris.douglas], [~daryn], [~manojg] for the discussions in the other jiras.
> I'm creating this one to discuss further.
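
For illustration only, the bypass-user configuration described above might be expressed in hdfs-site.xml roughly as follows; the property name and user values here are assumptions for the sketch, not necessarily the final key added by the patch.
{noformat}
<!-- Hypothetical sketch: users listed here would have the NameNode skip the
     external attribute provider and serve attributes from HDFS itself. -->
<property>
  <name>dfs.namenode.authorization.provider.bypass.users</name>
  <value>hdfs,distcpuser</value>
</property>
{noformat}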


