hadoop-hdfs-issues mailing list archives

From "Eric Yang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8312) Trash does not descent into child directories to check for permissions
Date Wed, 17 May 2017 01:05:04 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16013333#comment-16013333 ]

Eric Yang commented on HDFS-8312:
---------------------------------

[~daryn] The current implementation is only good for protecting against common mistakes.  A
hacker-proof implementation needs to come from server-side trash rather than the client API.  This
patch does not make the HDFS client any worse than the existing implementation.  I will take a 5%
improvement today rather than wait for server-side trash to arrive.
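
For readers following the patch discussion, below is a rough sketch of the kind of best-effort, client-side
check being talked about: before a directory is moved to trash, walk the subtree and verify the caller would
be allowed to delete everything in it. The class and method names (TrashPermissionSketch, canDeleteRecursively)
are made up for illustration and this is not the attached patch; it ignores ACLs, the sticky bit, and superuser
status, and being client-side it is inherently racy, which is why it can only catch common mistakes rather than
be hacker-proof.

{code}
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.security.UserGroupInformation;

/** Illustrative sketch only, not the HDFS-8312 patch. */
public class TrashPermissionSketch {

  /** Returns true if the current user appears able to delete the whole subtree. */
  public static boolean canDeleteRecursively(FileSystem fs, Path dir)
      throws IOException {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    String user = ugi.getShortUserName();
    List<String> groups = Arrays.asList(ugi.getGroupNames());

    FileStatus status = fs.getFileStatus(dir);
    if (!status.isDirectory()) {
      return true; // a file's removal is governed by its parent directory
    }
    // Removing entries from a directory requires write+execute on the directory.
    if (!hasAccess(status, user, groups, FsAction.WRITE_EXECUTE)) {
      return false;
    }
    for (FileStatus child : fs.listStatus(dir)) {
      if (child.isDirectory() && !canDeleteRecursively(fs, child.getPath())) {
        return false;
      }
    }
    return true;
  }

  // Plain owner/group/other evaluation of the permission bits.
  private static boolean hasAccess(FileStatus status, String user,
      List<String> groups, FsAction action) {
    FsPermission perm = status.getPermission();
    if (user.equals(status.getOwner())) {
      return perm.getUserAction().implies(action);
    }
    if (groups.contains(status.getGroup())) {
      return perm.getGroupAction().implies(action);
    }
    return perm.getOtherAction().implies(action);
  }
}
{code}

In the reproduction below, such a check would fail for user2 at /BSS/level1 (mode 750, owned by user1),
matching the permission-denied result seen when trash is disabled.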

> Trash does not descent into child directories to check for permissions
> ----------------------------------------------------------------------
>
>                 Key: HDFS-8312
>                 URL: https://issues.apache.org/jira/browse/HDFS-8312
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: fs, security
>    Affects Versions: 2.2.0, 2.6.0, 2.7.2
>            Reporter: Eric Yang
>            Assignee: Weiwei Yang
>            Priority: Critical
>             Fix For: 2.9.0, 2.7.4, 3.0.0-alpha1, 2.8.1
>
>         Attachments: HDFS-8312-001.patch, HDFS-8312-002.patch, HDFS-8312-003.patch, HDFS-8312-004.patch, HDFS-8312-005.patch, HDFS-8312-branch-2.7.patch, HDFS-8312-branch-2.8.01.patch, HDFS-8312-branch-2.8.1.001.patch, HDFS-8312-testcase.patch
>
>
> HDFS trash does not descend into child directories to check whether the user has permission to delete the files there.  For example:
> Run the following commands as the superuser to initialize the directory structure:
> {code}
> hadoop fs -mkdir /BSS/level1
> hadoop fs -mkdir /BSS/level1/level2
> hadoop fs -mkdir /BSS/level1/level2/level3
> hadoop fs -put /tmp/appConfig.json /BSS/level1/level2/level3/testfile.txt
> hadoop fs -chown user1:users /BSS/level1/level2/level3/testfile.txt
> hadoop fs -chown -R user1:users /BSS/level1
> hadoop fs -chmod -R 750 /BSS/level1
> hadoop fs -chmod -R 640 /BSS/level1/level2/level3/testfile.txt
> hadoop fs -chmod 775 /BSS
> {code}
> Change to a normal user called user2. 
> When trash is enabled:
> {code}
> sudo su - user2
> hadoop fs -rm -r /BSS/level1
> 15/05/01 16:51:20 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 3600 minutes, Emptier interval = 0 minutes.
> Moved: 'hdfs://bdvs323.svl.ibm.com:9000/BSS/level1' to trash at: hdfs://bdvs323.svl.ibm.com:9000/user/user2/.Trash/Current
> {code}
> When trash is disabled:
> {code}
> /opt/ibm/biginsights/IHC/bin/hadoop fs -Dfs.trash.interval=0 -rm -r /BSS/level1
> 15/05/01 16:58:31 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
> rm: Permission denied: user=user2, access=ALL, inode="/BSS/level1":user1:users:drwxr-x---
> {code}
> There is an inconsistency between the trash behavior and the delete behavior.  When trash is enabled, files owned by user1 are deleted by user2.  It looks like trash does not recursively validate whether the files in child directories can be removed.
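
For context on why the two commands above behave differently: with trash enabled, the shell hands the path to
Hadoop's Trash class, which relocates the subtree with a rename; with trash disabled, it issues a recursive
delete, which the NameNode authorizes far more strictly. A minimal sketch of the two client-side calls, using
the paths from the example above (the authorization decisions themselves are made by the NameNode; this only
shows the calls involved):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

/** Illustration of the two code paths behind "hadoop fs -rm -r". */
public class TrashVsDelete {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path target = new Path("/BSS/level1");

    // Trash enabled: the subtree is renamed into /user/<caller>/.Trash/Current.
    // A rename is authorized against the source's parent (/BSS) and the trash
    // destination, not the subtree, so user2 succeeds although the data is user1's.
    Trash.moveToAppropriateTrash(fs, target, conf);

    // Trash disabled: a recursive delete makes the NameNode check the caller's
    // access on the directory and its sub-directories (hence "access=ALL" in
    // the error above), so the same request from user2 is rejected.
    // fs.delete(target, true);
  }
}
{code}

That gap between the rename check and the recursive delete check is what the client-side validation discussed
above tries to narrow.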



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


