From: "John Zhuge (JIRA)"
To: hdfs-issues@hadoop.apache.org
Date: Fri, 5 Aug 2016 16:00:22 +0000 (UTC)
Subject: [jira] [Commented] (HDFS-10721) HDFS NFS Gateway - Exporting multiple Directories

    [ https://issues.apache.org/jira/browse/HDFS-10721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409617#comment-15409617 ]

John Zhuge commented on HDFS-10721:
-----------------------------------

Sorry for the confusion: {{c_user}} is a new HDFS user with read-only access to {{/data}}, created specifically to provide a workaround. I should have named the user {{readonly_b_webapp}} :)

I do agree with you that an export table, like those used by Unix NFSv3 or NFSv4 servers, gives the admin more control. The export table should probably support an allowed-client list and export options per export point.
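For illustration only, this is roughly what such a table looks like in a standard Linux /etc/exports file (the hostnames, networks, and options here are made up, and the HDFS NFS gateway does not read such a file today):

    # One line per export point, each with its own allowed clients and options.
    /user          10.0.0.0/24(ro,sync)
    /app-logs      clienta.example.com(rw,sync)

Each entry names an export point, the clients allowed to mount it, and the options applied to those clients.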
> HDFS NFS Gateway - Exporting multiple Directories
> --------------------------------------------------
>
> Key: HDFS-10721
> URL: https://issues.apache.org/jira/browse/HDFS-10721
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs
> Reporter: Senthilkumar
> Priority: Minor
>
> The current HDFS NFS gateway supports exporting only one directory.
> Example:
> <property>
>   <name>nfs.export.point</name>
>   <value>/user</value>
> </property>
> This property lets us export one particular directory.
>
> Code Block:
> public RpcProgramMountd(NfsConfiguration config,
>     DatagramSocket registrationSocket, boolean allowInsecurePorts)
>     throws IOException {
>   // Note that RPC cache is not enabled
>   super("mountd", "localhost", config.getInt(
>       NfsConfigKeys.DFS_NFS_MOUNTD_PORT_KEY,
>       NfsConfigKeys.DFS_NFS_MOUNTD_PORT_DEFAULT), PROGRAM, VERSION_1,
>       VERSION_3, registrationSocket, allowInsecurePorts);
>   exports = new ArrayList<String>();
>   exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>       NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
>   this.hostsMatcher = NfsExports.getInstance(config);
>   this.mounts = Collections.synchronizedList(new ArrayList<MountEntry>());
>   UserGroupInformation.setConfiguration(config);
>   SecurityUtil.login(config, NfsConfigKeys.DFS_NFS_KEYTAB_FILE_KEY,
>       NfsConfigKeys.DFS_NFS_KERBEROS_PRINCIPAL_KEY);
>   this.dfsClient = new DFSClient(NameNode.getAddress(config), config);
> }
>
> Export List:
> exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>     NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
>
> The current code supports exporting only a single directory; based on our example, only /user can be exported.
> Most production environments expect several directories to be exported, so that different directories can be mounted for different clients.
> Example:
> <property>
>   <name>nfs.export.point</name>
>   <value>/user,/data/web_crawler,/app-logs</value>
> </property>
> Here I have three directories to be exported:
> 1) /user
> 2) /data/web_crawler
> 3) /app-logs
> This would let us mount a specific directory for a particular client (say client A wants to write data to /app-logs; the Hadoop admin can mount it and hand it over to that client).
> Please advise. Sorry if this feature is already implemented.
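For what it's worth, a minimal sketch of the parsing side of such a change, assuming nfs.export.point simply carried a comma-separated list as in the example above. This is illustrative only, not actual Hadoop code; the helper class and method below are hypothetical and would stand in for the single exports.add call in the RpcProgramMountd constructor:

    import java.util.ArrayList;
    import java.util.List;

    class ExportPointParser {
      // Split a comma-separated nfs.export.point value into individual export
      // points, trimming whitespace and skipping empty entries.
      static List<String> parseExportPoints(String exportProp) {
        List<String> exports = new ArrayList<String>();
        for (String exportPoint : exportProp.split(",")) {
          String trimmed = exportPoint.trim();
          if (!trimmed.isEmpty()) {
            exports.add(trimmed);
          }
        }
        return exports;
      }
    }

With this sketch, parseExportPoints("/user,/data/web_crawler,/app-logs") would yield the three export points from the example, each of which the gateway could then expose as its own mount point.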