hadoop-hdfs-issues mailing list archives

From "Senthilkumar (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10721) HDFS NFS Gateway - Exporting multiple Directories
Date Mon, 07 Nov 2016 14:07:58 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15644260#comment-15644260 ]

Senthilkumar commented on HDFS-10721:
-------------------------------------

Hi [~benoyantony] / [~jzhuge], could you please review the attached patch and let me know if you would like any improvements?

Added a new method exportPointToList() in RpcProgramMountd to parse the comma-separated export string into a list (a rough sketch of the idea follows below).
Added a new test case testMultipleExportPoint in TestExportsTable.
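
For context, a minimal sketch of what such a parsing helper might look like; the method name exportPointToList comes from the description above, but the signature and body here are assumptions rather than the contents of the attached patch:

private List<String> exportPointToList(String exportPoints) {
  // Split the configured nfs.export.point value on commas, trimming
  // whitespace and skipping empty entries (assumed behaviour).
  List<String> exports = new ArrayList<String>();
  if (exportPoints != null) {
    for (String exportPoint : exportPoints.split(",")) {
      String trimmed = exportPoint.trim();
      if (!trimmed.isEmpty()) {
        exports.add(trimmed);
      }
    }
  }
  return exports;
}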

> HDFS NFS Gateway - Exporting multiple Directories 
> --------------------------------------------------
>
>                 Key: HDFS-10721
>                 URL: https://issues.apache.org/jira/browse/HDFS-10721
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs
>            Reporter: Senthilkumar
>            Assignee: Senthilkumar
>            Priority: Minor
>              Labels: patch
>         Attachments: HDFS-10721.001.patch
>
>
> The current HDFS NFS gateway supports exporting only one directory.
> Example:
>   <property>
>     <name>nfs.export.point</name>
>     <value>/user</value>
>   </property>
> This property lets us export a particular directory.
> Code block:
> public RpcProgramMountd(NfsConfiguration config,
>       DatagramSocket registrationSocket, boolean allowInsecurePorts)
>       throws IOException {
>     // Note that RPC cache is not enabled
>     super("mountd", "localhost", config.getInt(
>         NfsConfigKeys.DFS_NFS_MOUNTD_PORT_KEY,
>         NfsConfigKeys.DFS_NFS_MOUNTD_PORT_DEFAULT), PROGRAM, VERSION_1,
>         VERSION_3, registrationSocket, allowInsecurePorts);
>     exports = new ArrayList<String>();
>     exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>         NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
>     this.hostsMatcher = NfsExports.getInstance(config);
>     this.mounts = Collections.synchronizedList(new ArrayList<MountEntry>());
>     UserGroupInformation.setConfiguration(config);
>     SecurityUtil.login(config, NfsConfigKeys.DFS_NFS_KEYTAB_FILE_KEY,
>         NfsConfigKeys.DFS_NFS_KERBEROS_PRINCIPAL_KEY);
>     this.dfsClient = new DFSClient(NameNode.getAddress(config), config);
>   }
> Export List:
> exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>         NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
> The current code supports exposing only one directory; in the example above, only /user can be exported.
> Most production environments expect several directories to be exported, so that different directories can be mounted by different clients.
> Example: 
>   <property>
>     <name>nfs.export.point</name>
>     <value>/user,/data/web_crawler,/app-logs</value>
>   </property>
> Here I have three directories to be exposed:
> 1) /user
> 2) /data/web_crawler
> 3) /app-logs
> This would let us mount specific directories for particular clients (say client A wants to write data to /app-logs; the Hadoop admin can mount that export and hand it over to the client).
> Please advise. Sorry if this feature is already implemented.
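
For illustration only, a minimal sketch of how the export initialization in the constructor quoted above could consume a comma-separated nfs.export.point value; this assumes a simple split-on-comma approach and is not the attached patch:

exports = new ArrayList<String>();
// Read the configured value once and split it on commas so that each
// entry becomes its own export point (assumed approach).
String exportPoints = config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
    NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT);
for (String exportPoint : exportPoints.split(",")) {
  exports.add(exportPoint.trim());
}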



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

