hadoop-hdfs-issues mailing list archives
From "Yongjun Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-11100) Recursively deleting file protected by sticky bit should fail
Date Wed, 30 Nov 2016 06:09:58 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15707672#comment-15707672 ]

Yongjun Zhang commented on HDFS-11100:
--------------------------------------

Hi [~jzhuge],

Thanks for reporting the issue and for the patch. I did a review and it looks good overall.
I have the following comments:

1. Some optimization can be done in the sticky bit checking to avoid unnecessary memory
allocations (assuming all children of the same directory have the same number of components
in their paths and share the same parent path components):
{code}
        INodeAttributes[] childInodeAttrs = null;
        byte[][] childComponents = null;
        int childCompIdx = 0;
        for (INode child : cList) {
          if (childComponents == null) {
            // First child: build the full path component array once.
            childComponents = child.getPathComponents();
            childCompIdx = childComponents.length - 1;
          } else {
            // Subsequent children share the parent components; only the
            // last component (the child's own name) changes.
            childComponents[childCompIdx] = child.getLocalNameBytes();
          }
          if (childInodeAttrs == null) {
            // The attribute array can likewise be allocated once and reused.
            childInodeAttrs = new INodeAttributes[childComponents.length];
          }
{code}
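To make the intent concrete, here is a small self-contained sketch of the reuse pattern, using plain Java with a hypothetical {{componentsFor}} helper standing in for {{INode#getPathComponents}} (no HDFS classes involved):

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

public class ComponentReuse {
    // Hypothetical stand-in for INode#getPathComponents: split an absolute
    // path into its byte[] components.
    static byte[][] componentsFor(String absolutePath) {
        String[] parts = absolutePath.substring(1).split("/");
        byte[][] comps = new byte[parts.length][];
        for (int i = 0; i < parts.length; i++) {
            comps[i] = parts[i].getBytes(StandardCharsets.UTF_8);
        }
        return comps;
    }

    public static void main(String[] args) {
        List<String> children = Arrays.asList("f1", "f2", "f3");
        byte[][] shared = null;   // allocated once, reused for every sibling
        int lastIdx = 0;
        for (String child : children) {
            if (shared == null) {
                shared = componentsFor("/tmp/test/sticky_dir/" + child);
                lastIdx = shared.length - 1;
            } else {
                // Siblings differ only in the final component, so overwrite
                // that slot instead of rebuilding the whole array.
                shared[lastIdx] = child.getBytes(StandardCharsets.UTF_8);
            }
            String last = new String(shared[lastIdx], StandardCharsets.UTF_8);
            if (!last.equals(child)) {
                throw new AssertionError("expected " + child + " but got " + last);
            }
        }
        System.out.println("OK");
    }
}
{code}

The loop does one allocation for the first child and zero for the rest, which is the saving suggested above.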

2. {{testStickyBitRecursiveDeleteFile()}} and {{testStickyBitRecursiveDeleteDir()}} have
quite a bit of code in common; I suggest consolidating them. At least the bottom part starting
from "try" can be shared; the only difference is whether the child is a file or a directory.
We may not even need to create a file under "dir" for {{testStickyBitRecursiveDeleteDir()}}.

3. The tests make sure that an exception is thrown when the sticky bit is set and the deletion
is run as a different user. Can we add a positive test verifying that, with the sticky bit
set, the deletion succeeds when run as the same user?
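The POSIX analogue of that positive case can be sketched locally with plain Java against the local file system rather than HDFS (assumes a POSIX system; {{PosixFilePermission}} has no sticky-bit constant, so the sketch shells out to {{chmod +t}}):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class StickyBitSameUser {
    // Creates a sticky directory and a file inside it, both owned by the
    // current user, then deletes the file as that same user.
    static boolean deleteOwnFileUnderStickyDir()
            throws IOException, InterruptedException {
        Path dir = Files.createTempDirectory("sticky_dir");
        // Set the sticky bit on the directory.
        new ProcessBuilder("chmod", "+t", dir.toString()).start().waitFor();
        Path f2 = Files.createFile(dir.resolve("f2"));
        // Same user that owns the file: the sticky bit must not block this.
        boolean deleted = Files.deleteIfExists(f2);
        Files.delete(dir);
        return deleted;
    }

    public static void main(String[] args) throws Exception {
        if (!deleteOwnFileUnderStickyDir()) {
            throw new AssertionError("same-user delete under sticky bit failed");
        }
        System.out.println("same-user delete succeeded");
    }
}
{code}

The HDFS test would of course go through the MiniDFSCluster scaffolding instead, but the assertion is the same: sticky bit set, same user, delete succeeds.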

4. I noticed that Linux may continue deleting the children recursively even if a parent is
protected by the sticky bit. HDFS may have different semantics here: it throws an exception
without going further down the tree. I just want to make a note of it here.

Thanks.



> Recursively deleting file protected by sticky bit should fail
> -------------------------------------------------------------
>
>                 Key: HDFS-11100
>                 URL: https://issues.apache.org/jira/browse/HDFS-11100
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 2.6.0
>            Reporter: John Zhuge
>            Assignee: John Zhuge
>            Priority: Critical
>              Labels: permissions
>         Attachments: HDFS-11100.001.patch, HDFS-11100.002.patch, HDFS-11100.002.patch,
hdfs_cmds
>
>
> Recursively deleting a directory that contains files or directories protected by sticky
bit should fail but it doesn't in HDFS. In the case below, {{/tmp/test/sticky_dir/f2}} is
protected by sticky bit, thus recursive deleting {{/tmp/test/sticky_dir}} should fail.
> {noformat}
> + hdfs dfs -ls -R /tmp/test
> drwxrwxrwt   - jzhuge supergroup          0 2016-11-03 18:08 /tmp/test/sticky_dir
> -rwxrwxrwx   1 jzhuge supergroup          0 2016-11-03 18:08 /tmp/test/sticky_dir/f2
> + sudo -u hadoop hdfs dfs -rm -skipTrash /tmp/test/sticky_dir/f2
> rm: Permission denied by sticky bit: user=hadoop, path="/tmp/test/sticky_dir/f2":jzhuge:supergroup:-rwxrwxrwx,
parent="/tmp/test/sticky_dir":jzhuge:supergroup:drwxrwxrwt
> + sudo -u hadoop hdfs dfs -rm -r -skipTrash /tmp/test/sticky_dir
> Deleted /tmp/test/sticky_dir
> {noformat}
> Centos 6.4 behavior:
> {noformat}
> $ ls -lR /tmp/test
> /tmp/test: 
> total 4
> drwxrwxrwt 2 systest systest 4096 Nov  3 18:36 sbit
> /tmp/test/sbit:
> total 0
> -rw-rw-rw- 1 systest systest 0 Nov  2 13:45 f2
> $ sudo -u mapred rm -fr /tmp/test/sbit
> rm: cannot remove `/tmp/test/sbit/f2': Operation not permitted
> $ chmod -t /tmp/test/sbit
> $ sudo -u mapred rm -fr /tmp/test/sbit
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
