hadoop-hdfs-issues mailing list archives

From "Hajime Osako (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-13005) HttpFs checks subdirectories ACL status when LISTSTATUS is used
Date Tue, 09 Jan 2018 23:26:00 GMT

     [ https://issues.apache.org/jira/browse/HDFS-13005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Hajime Osako updated HDFS-13005:
--------------------------------
    Description: 
HttpFs LISTSTATUS calls fail if a subdirectory uses ACLs, because org.apache.hadoop.fs.http.server.FSOperations.StatusPairs#StatusPairs
gets the list of child objects and checks each child's ACL status one by one, rather than checking
the ACL of the target directory itself.
Would like to know if this is intentional.

{code}
      /*
       * For each FileStatus, attempt to acquire an AclStatus.  If the
       * getAclStatus throws an exception, we assume that ACLs are turned
       * off entirely and abandon the attempt.
       */
      boolean useAcls = true;   // Assume ACLs work until proven otherwise
      ...
{code}
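For illustration only, here is a minimal self-contained sketch of the difference between the two probing strategies described above. This is not the actual Hadoop source: AclProbeSketch, aclReadable, and the hard-coded paths are invented stand-ins that simulate a caller who can read the parent directory but is denied on an ACL-protected child, which is the situation in the reproduce steps below.

```java
import java.util.Arrays;
import java.util.List;

public class AclProbeSketch {
    // Hypothetical stand-in for an HDFS getAclStatus call; not the real API.
    // Simulates a caller who may stat /acltest but not its ACL-protected child.
    static boolean aclReadable(String path) {
        return !path.equals("/acltest/subdir");
    }

    // Per-child probe (the pattern the report describes): the listing is
    // aborted by the first child whose ACL status the caller cannot read.
    static void probeChildren(List<String> children) {
        for (String child : children) {
            if (!aclReadable(child)) {
                throw new SecurityException("Permission denied: " + child);
            }
        }
    }

    // Parent-only probe (the behavior the reporter expected): a single
    // check against the listed directory itself.
    static boolean probeParent(String dir) {
        return aclReadable(dir);
    }

    public static void main(String[] args) {
        List<String> children = Arrays.asList("/acltest/subdir", "/acltest/subdir2");
        System.out.println("parent probe ok: " + probeParent("/acltest"));
        try {
            probeChildren(children);
            System.out.println("child probe ok");
        } catch (SecurityException e) {
            System.out.println("child probe failed: " + e.getMessage());
        }
    }
}
```

Under this simulation the parent-only probe succeeds while the per-child probe fails, matching the WebHDFS-vs-HttpFs difference shown below.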

Reproduce steps:
{code}
# NOTE: The test user "admin" has full access to /acltest
[root@sandbox ~]# hdfs dfs -ls -R /acltest
drwxrwx---+  - hdfs test          0 2018-01-09 08:44 /acltest/subdir
-rwxrwx---   1 hdfs test        647 2018-01-09 08:44 /acltest/subdir/derby.log
drwxr-xr-x   - hdfs test          0 2018-01-09 09:15 /acltest/subdir2
[root@sandbox ~]# hdfs dfs -getfacl /acltest/subdir
# file: /acltest/subdir
# owner: hdfs
# group: test
user::rwx
user:hdfs:rw-
group::r-x
mask::rwx
other::---

# WebHDFS works
[root@sandbox ~]# sudo -u admin curl --negotiate -u : "http://`hostname -f`:50070/webhdfs/v1/acltest?op=LISTSTATUS"
{"FileStatuses":{"FileStatus":[
{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":1,"fileId":79057,"group":"test","length":0,"modificationTime":1515487493078,"owner":"hdfs","pathSuffix":"subdir","permission":"770","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":79059,"group":"test","length":0,"modificationTime":1515489337849,"owner":"hdfs","pathSuffix":"subdir2","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
]}}

# But not via HttpFs
[root@sandbox ~]# sudo -u admin curl --negotiate -u : "http://`hostname -f`:14000/webhdfs/v1/acltest?op=LISTSTATUS"
{"RemoteException":{"message":"Permission denied: user=admin, access=EXECUTE, inode=\"\/acltest\/subdir\":hdfs:test:drwxrwx---","exception":"AccessControlException","javaClassName":"org.apache.hadoop.security.AccessControlException"}}

# HDFS audit log
[root@sandbox ~]# tail /var/log/hadoop/hdfs/hdfs-audit.log | grep -w admin
2018-01-09 23:09:24,362 INFO FSNamesystem.audit: allowed=true   ugi=admin (auth:KERBEROS)      ip=/172.18.0.2  cmd=listStatus  src=/acltest    dst=null        perm=null       proto=webhdfs
2018-01-09 23:09:31,937 INFO FSNamesystem.audit: allowed=true   ugi=admin (auth:PROXY) via httpfs/sandbox.hortonworks.com@EXAMPLE.COM (auth:KERBEROS)   ip=/172.18.0.2  cmd=listStatus  src=/acltest    dst=null        perm=null       proto=rpc
2018-01-09 23:09:31,978 INFO FSNamesystem.audit: allowed=false  ugi=admin (auth:PROXY) via httpfs/sandbox.hortonworks.com@EXAMPLE.COM (auth:KERBEROS)   ip=/172.18.0.2  cmd=getAclStatus       src=/acltest/subdir     dst=null        perm=null       proto=rpc
{code}



> HttpFs checks subdirectories ACL status when LISTSTATUS is used
> ---------------------------------------------------------------
>
>                 Key: HDFS-13005
>                 URL: https://issues.apache.org/jira/browse/HDFS-13005
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: httpfs
>    Affects Versions: 2.7.3
>            Reporter: Hajime Osako
>            Priority: Minor
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

