hadoop-hdfs-issues mailing list archives

From "Daryn Sharp (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8312) Trash does not descent into child directories to check for permissions
Date Tue, 16 May 2017 21:50:04 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16013167#comment-16013167 ]

Daryn Sharp commented on HDFS-8312:
-----------------------------------

bq. The patch fixed the rename API by adding the permission check of delete when the destination
is the trash directory.  This had to be fixed, otherwise it exposes a security hole: a malicious
user could use rename to move other people's files/dirs to trash, where they would subsequently
get deleted.

I lost track of this jira until I saw it being backported.  I'll reiterate, bluntly this time,
that +this patch is completely worthless from a security perspective+.  It's an honor-system
based sanity check for the good users.  A malicious user is never going to pass the flag to
request the permission subcheck.  Why even hack fs -rm to remove the flag when you can just
use fs -mv?
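The bypass is easy to see on a local POSIX filesystem, which enforces the same rule as the HDFS rename path (a sketch; the directory names and the stand-in "trash" directory are made up):

```shell
# Simulate the trash bypass with plain rename(2) semantics.
work=$(mktemp -d)
mkdir -p "$work/shared" "$work/trash"

# A file the mover can neither read nor write (mode 000)...
touch "$work/shared/secret.txt"
chmod 000 "$work/shared/secret.txt"

# ...moves anyway: rename only checks write+execute privs on the
# source and destination directories, never the file's own bits.
mv "$work/shared/secret.txt" "$work/trash/" && moved=yes

ls "$work/trash"
chmod -R 700 "$work" && rm -rf "$work"
```

No flag passed to `mv` changes this check, which is why a client-side permission subcheck behind an optional flag offers no protection.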

bq.  Suppose userA has no privilege to delete fileB; a direct FS.delete(fileB) will fail.
However, FS.rename(fileB, fileBinTrash) would succeed, because it only checks write access
to the parent of fileB and write access to the ancestor of fileBinTrash.

Yes, rename/delete modify a directory, which only requires write privs.  That's POSIX semantics.
 Small corrections, assuming the user has write privs to a specific dir:
# delete(fileB) does and should succeed regardless of fileB's permissions – ignoring sticky
bit rules for simplicity
# delete(dirB) will fail if dirB is non-empty and the user lacks permission on its contents.  The
user has to descend the tree (read privs) and remove the children (write privs)
# rename always works on a file or subdir regardless of the permissions on either.
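All three rules can be checked on any local POSIX filesystem (a sketch with made-up names; run as a non-root user, since root bypasses permission checks):

```shell
work=$(mktemp -d)

# 1. delete(fileB) succeeds regardless of fileB's own permissions,
#    as long as we hold write privs on the parent directory
#    (rm -f skips the interactive prompt for an unwritable file).
touch "$work/fileB"; chmod 000 "$work/fileB"
rm -f "$work/fileB" && delete_file=ok

# 2. delete(dirB) on a non-empty directory needs read privs to
#    descend it and write privs to remove the children; mode 000
#    blocks the descent, so the recursive delete fails (as non-root).
mkdir -p "$work/dirB/child"; chmod 000 "$work/dirB"
rm -rf "$work/dirB" 2>/dev/null || delete_dir=denied

# 3. rename works on a file regardless of its own permissions.
touch "$work/fileC"; chmod 000 "$work/fileC"
mv "$work/fileC" "$work/fileC.renamed" && rename=ok

echo "delete(fileB)=$delete_file delete(dirB)=$delete_dir rename=$rename"
chmod 700 "$work/dirB" 2>/dev/null; rm -rf "$work"
```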

––

Consider a *nix system.  Let's say I foolishly have a single volume for the entire system,
and I run tmpwatch to delete old stuff in /tmp.  It's the same situation.  If I have write
privs to a directory, I can move anything in it to /tmp and it'll get blown away.

> Trash does not descent into child directories to check for permissions
> ----------------------------------------------------------------------
>
>                 Key: HDFS-8312
>                 URL: https://issues.apache.org/jira/browse/HDFS-8312
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: fs, security
>    Affects Versions: 2.2.0, 2.6.0, 2.7.2
>            Reporter: Eric Yang
>            Assignee: Weiwei Yang
>            Priority: Critical
>             Fix For: 2.9.0, 2.7.4, 3.0.0-alpha1, 2.8.1
>
>         Attachments: HDFS-8312-001.patch, HDFS-8312-002.patch, HDFS-8312-003.patch, HDFS-8312-004.patch, HDFS-8312-005.patch, HDFS-8312-branch-2.7.patch, HDFS-8312-branch-2.8.01.patch, HDFS-8312-testcase.patch
>
>
> HDFS trash does not descend into child directories to check if the user has permission to delete files.  For example:
> Run the following command to initialize directory structure as super user:
> {code}
> hadoop fs -mkdir /BSS/level1
> hadoop fs -mkdir /BSS/level1/level2
> hadoop fs -mkdir /BSS/level1/level2/level3
> hadoop fs -put /tmp/appConfig.json /BSS/level1/level2/level3/testfile.txt
> hadoop fs -chown user1:users /BSS/level1/level2/level3/testfile.txt
> hadoop fs -chown -R user1:users /BSS/level1
> hadoop fs -chmod -R 750 /BSS/level1
> hadoop fs -chmod -R 640 /BSS/level1/level2/level3/testfile.txt
> hadoop fs -chmod 775 /BSS
> {code}
> Change to a normal user called user2. 
> When trash is enabled:
> {code}
> sudo su - user2
> hadoop fs -rm -r /BSS/level1
> 15/05/01 16:51:20 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 3600 minutes, Emptier interval = 0 minutes.
> Moved: 'hdfs://bdvs323.svl.ibm.com:9000/BSS/level1' to trash at: hdfs://bdvs323.svl.ibm.com:9000/user/user2/.Trash/Current
> {code}
> When trash is disabled:
> {code}
> /opt/ibm/biginsights/IHC/bin/hadoop fs -Dfs.trash.interval=0 -rm -r /BSS/level1
> 15/05/01 16:58:31 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
> rm: Permission denied: user=user2, access=ALL, inode="/BSS/level1":user1:users:drwxr-x---
> {code}
> There is inconsistency between trash behavior and delete behavior.  When trash is enabled, files owned by user1 are deleted by user2.  It looks like trash does not recursively validate whether files in child directories can be removed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

