hadoop-mapreduce-user mailing list archives

From Rakesh Radhakrishnan <rake...@apache.org>
Subject Re: HDFS ACL | Unable to define ACL automatically for child folders
Date Mon, 19 Sep 2016 04:26:02 GMT
It looks like '/user/test3' is owned by "hdfs", so access is denied when
operations are performed as the "shashi" user. A default ACL only applies to
children created after it is set, so existing sub-directories such as
'/user/test3' do not pick it up automatically. One idea is to recursively set
the ACL on the existing sub-directories and files as follows:

             hdfs dfs -setfacl -R -m default:user:shashi:rwx /user

            The -R option applies the operation to all files and
directories recursively.
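
Note also that a default ACL entry only controls the ACLs of newly created
children; to let "shashi" write into the existing directories themselves, an
access ACL entry is needed as well. A minimal sketch of the full sequence
(assuming the commands are run as a user allowed to change ACLs on /user,
e.g. the "hdfs" superuser):

```shell
# Grant shashi an access ACL entry (takes effect on the existing
# directories themselves) plus a default ACL entry (inherited by
# files/dirs created later), applied recursively under /user.
hdfs dfs -setfacl -R -m user:shashi:rwx,default:user:shashi:rwx /user

# Verify: the output should now include "user:shashi:rwx" and
# "default:user:shashi:rwx" entries for the directory.
hdfs dfs -getfacl /user/test3

# The earlier failing put should now succeed when run as shashi.
hadoop fs -put test.txt /user/test3
```

Both entry types are needed because HDFS checks the access ACL of the target
directory at write time, while the default ACL only shapes what new children
inherit.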

Regards,
Rakesh

On Sun, Sep 18, 2016 at 8:53 PM, Shashi Vishwakarma <
shashi.vish123@gmail.com> wrote:

> I have the following scenario: there is a parent folder /user in HDFS with
> five child folders such as test1, test2, test3, etc.
>
>     /user/test1
>     /user/test2
>     /user/test3
>
> I applied a default ACL on the parent folder to make sure the user
> automatically has access to the child folders.
>
>      hdfs dfs -setfacl -m default:user:shashi:rwx /user
>
>
> But when I try to put a file, it gives a permission denied exception:
>
>     hadoop fs -put test.txt  /user/test3
>     put: Permission denied: user=shashi, access=WRITE,
> inode="/user/test3":hdfs:supergroup:drwxr-xr-x
>
> **getfacl output**
>
>     hadoop fs -getfacl /user/test3
>     # file: /user/test3
>     # owner: hdfs
>     # group: supergroup
>     user::rwx
>     group::r-x
>     other::r-x
>
> Any pointers on this?
>
> Thanks
> Shashi
>
