hadoop-hdfs-issues mailing list archives

From "Rushabh S Shah (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-13390) AccessControlException for overwrite but not for delete
Date Tue, 03 Apr 2018 14:53:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424130#comment-16424130 ]

Rushabh S Shah commented on HDFS-13390:

bq. So why does overwriting a file produce an AccessControlException but not the delete method?
For the {{startFileInt}} path with the {{overwrite}} flag set, it resolves to these arguments:
  checkPermission(iip, false, null, null, FsAction.WRITE, null, false)

For the {{delete}} path, it resolves to these arguments:
fsd.checkPermission(iip, false, null, FsAction.WRITE, null, FsAction.ALL, true);

For reference, here is the full {{checkPermission}} signature:
checkPermission(INodesInPath inodesInPath, boolean doCheckOwner,
      FsAction ancestorAccess, FsAction parentAccess, FsAction access,
      FsAction subAccess, boolean ignoreEmptyDir)

Notice the {{access}} parameter: it is null for {{delete}} and {{FsAction.WRITE}} for {{startFileInt}}.
That means {{delete}} skips the check on the file inode itself and only requires {{FsAction.WRITE}}
on the parent directory (the {{parentAccess}} argument), while {{startFileInt}} checks whether the
file itself is writable by the current user.
I don't have much context on why the behavior is different.
IMO it should be the same, but you can provide a patch and others will review it.
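To make the null-skipping behavior concrete, here is a minimal standalone sketch (plain Java, not the actual FSPermissionChecker code; {{isChecked}} is a hypothetical stand-in) of the rule that a null {{FsAction}} argument means the corresponding check is skipped:

```java
// Simplified stand-in for the FSPermissionChecker semantics described above:
// each FsAction argument to checkPermission is enforced only when non-null.
// Illustrative sketch only, not the real HDFS code.
public class CheckPermissionSketch {
    enum FsAction { READ, WRITE, ALL }

    // Hypothetical helper: a null FsAction means "skip this check".
    static boolean isChecked(FsAction action) {
        return action != null;
    }

    public static void main(String[] args) {
        // startFileInt with overwrite: access = FsAction.WRITE
        //   -> the file inode itself must be writable by the caller.
        System.out.println("overwrite checks file itself: " + isChecked(FsAction.WRITE));

        // delete: access = null, parentAccess = FsAction.WRITE
        //   -> the file-level check is skipped; only the parent
        //      directory must be writable by the caller.
        System.out.println("delete checks file itself:    " + isChecked(null));
        System.out.println("delete checks parent dir:     " + isChecked(FsAction.WRITE));
    }
}
```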

> AccessControlException for overwrite but not for delete
> -------------------------------------------------------
>                 Key: HDFS-13390
>                 URL: https://issues.apache.org/jira/browse/HDFS-13390
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 2.9.0
>         Environment: OS: CentOS
> PyArrow version: 0.8.0
> Python version: 3.6
> HDFS: 2.9
>            Reporter: Nasir Ali
>            Priority: Minor
> *Problem:*
> I have a file (F-1) saved in HDFS with permissions "-rw-r--r--", owned by user "cnali".
User "nndugudi" cannot overwrite F-1 (and vice versa). hdfs.write will generate the following exception:
> org.apache.hadoop.security.AccessControlException: Permission denied: user=nndugudi,
access=WRITE, inode="/cerebralcortex/data/00000000-f81c-44d2-9db8-fea69f468d58/00000000-5087-3d56-ad0e-0b27c3c83182/20171105.gz":cnali:supergroup:-rw-r--r--
> However, user "nndugudi" can delete the file without any problem. So why does overwriting
a file produce an AccessControlException but not the delete method?
> *Sample Code*:
> File: [https://github.com/MD2Korg/CerebralCortex/blob/master/cerebralcortex/core/data_manager/raw/stream_handler.py]
> LOC: 659-705 (write_hdfs_day_file)
> *HDFS Configurations*:
> All configurations are set to default. Security is also disabled as of now.
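For what it's worth, the asymmetry the reporter observes mirrors ordinary POSIX filesystem semantics, where deleting a file is governed by write permission on the containing directory rather than on the file itself. A quick local-filesystem sketch of the same asymmetry (plain {{java.io}}, not HDFS; running as root would bypass the read-only bit, so run it as an ordinary user):

```java
// Local-filesystem analogue (not HDFS): deleting needs a writable parent
// directory, while overwriting needs a writable file. Run as a non-root
// user, since root bypasses file permission bits.
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class DeleteVsOverwrite {
    public static void main(String[] args) throws IOException {
        File dir = Files.createTempDirectory("perm-demo").toFile();
        File f = new File(dir, "data.gz");
        Files.write(f.toPath(), "payload".getBytes());
        f.setWritable(false, false);  // simulate a -r--r--r-- file

        System.out.println("file writable (overwrite allowed): " + f.canWrite());
        System.out.println("delete succeeded:                  " + f.delete());

        dir.delete();  // clean up the temp directory
    }
}
```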

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
