hadoop-mapreduce-user mailing list archives

From Chris Nauroth <cnaur...@hortonworks.com>
Subject Re: HDFS ACL | Unable to define ACL automatically for child folders
Date Mon, 19 Sep 2016 16:43:48 GMT
Hello Shashi,

It appears that you have applied a default ACL to /user, then attempted to put a file in /user,
and you are expecting the default ACL to grant authorization for user shashi to do that.  A
default ACL does not influence the actual permission checks performed by HDFS, so if user
shashi does not have the necessary access through plain HDFS permissions, then the default
ACL won’t grant access.

If your goal is to allow user shashi to put a file into /user, then what you likely want to
do is add an access ACL entry instead of a default ACL entry.  To do that, remove the "default:"
prefix from the ACL entry in your setfacl command, i.e. "user:shashi:rwx".
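Concretely, the access-ACL form of the command from later in this thread would look something like this (a sketch, reusing the user name shashi and the /user path from the thread; it assumes a running cluster and a user with permission to change ACLs on /user):

```shell
# Add an access ACL entry (no "default:" prefix), so it takes part
# in permission checks on /user itself right away.
hdfs dfs -setfacl -m user:shashi:rwx /user

# Verify: the new entry should appear as "user:shashi:rwx",
# not "default:user:shashi:rwx".
hdfs dfs -getfacl /user
```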

A default ACL only defines what ACL entries automatically get applied to new directories and
files that get created under that directory.  Note that applying a default ACL does not alter
anything for sub-directories that already exist.  The default ACL is copied from parent to
child at the time of creation of the file or sub-directory.  In your example, if /user/test1,
/user/test2 and /user/test3 already existed before you ran the setfacl command, then nothing
would have been changed for those directories.  However, if after the setfacl command you
ran something like "hdfs dfs -mkdir /user/test4", then the default ACL of /user would be
copied down to /user/test4 as both its default ACL and access ACL.
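That copy-at-creation behavior can be observed directly (a sketch against a running cluster, reusing the paths from this thread; test4 is a hypothetical new directory):

```shell
# Apply a default ACL to the parent directory.
hdfs dfs -setfacl -m default:user:shashi:rwx /user

# A directory created afterwards inherits the default ACL, as both
# its own default ACL and its access ACL.
hdfs dfs -mkdir /user/test4
hdfs dfs -getfacl /user/test4

# A directory that already existed when setfacl ran is unchanged.
hdfs dfs -getfacl /user/test3
```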

For more details on the differences between an access ACL and a default ACL, please refer
to the HDFS Permissions Guide documentation.


--Chris Nauroth

From: Shashi Vishwakarma <shashi.vish123@gmail.com>
Date: Monday, September 19, 2016 at 12:16 AM
To: Rakesh Radhakrishnan <rakeshr@apache.org>
Cc: "user.hadoop" <user@hadoop.apache.org>
Subject: Re: HDFS ACL | Unable to define ACL automatically for child folders

Thanks a lot, Rakesh. The above information is very helpful.


On Mon, Sep 19, 2016 at 12:39 PM, Rakesh Radhakrishnan <rakeshr@apache.org> wrote:
AFAIK, there is no Java API available for that. Perhaps you could do a recursive directory
listing for a path and invoke the #setAcl Java API for each entry.


On Mon, Sep 19, 2016 at 11:22 AM, Shashi Vishwakarma <shashi.vish123@gmail.com> wrote:

Thanks Rakesh.

Just one last question: is there any Java API available for recursively applying an ACL, or
do I need to iterate over all folders of the directory and apply the ACL to each?


On 19 Sep 2016 9:56 am, "Rakesh Radhakrishnan" <rakeshr@apache.org> wrote:
It looks like '/user/test3' has owner "hdfs" and is denying access when operations are performed
as user "shashi". One idea is to recursively set the ACL on sub-directories and files as follows:

             hdfs dfs -setfacl -R -m default:user:shashi:rwx /user

            The -R option can be used to apply the operation to all files and directories recursively.
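Note that the command above only adds default ACL entries, which affect future children, not the permission checks on directories that already exist. If the goal is also to let shashi write into the existing sub-directories, a sketch combining an access entry with the default entry might look like this (hdfs setfacl accepts a comma-separated list of ACL entries):

```shell
# Recursively add both an access entry (checked on every existing
# file and directory immediately) and a default entry (inherited by
# files and directories created later).
hdfs dfs -setfacl -R -m user:shashi:rwx,default:user:shashi:rwx /user
```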


On Sun, Sep 18, 2016 at 8:53 PM, Shashi Vishwakarma <shashi.vish123@gmail.com> wrote:
I have the following scenario: there is a parent folder /user in HDFS with five child folders,
such as test1, test2, test3, etc.


I applied an ACL on the parent folder to make sure the user automatically has access to the child folders.

     hdfs dfs -setfacl -m default:user:shashi:rwx /user

but when I try to put a file, it gives a permission denied exception:

    hadoop fs -put test.txt  /user/test3
    put: Permission denied: user=shashi, access=WRITE, inode="/user/test3":hdfs:supergroup:drwxr-xr-x

**getfacl output**

    hadoop fs -getfacl /user/test3
    # file: /user/test3
    # owner: hdfs
    # group: supergroup

Any pointers on this?

