hadoop-common-issues mailing list archives

From "Oleksandr Shevchenko (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-15725) FileSystem.deleteOnExit should check user permissions
Date Mon, 10 Sep 2018 13:37:00 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-15725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609199#comment-16609199 ]

Oleksandr Shevchenko commented on HADOOP-15725:
-----------------------------------------------

Thank you, [~stevel@apache.org].

{quote}
1. FS.deleteOnExit() calls FileSystem.delete(), so has exactly the same permissions as the
delete() call for that FS instance.
{quote}
In this case, the permissions are not exactly the same. Let me explain.
When we call FileSystem.delete(), we check the permissions of the user who calls the delete()
method. When we call FS.deleteOnExit() as user Alice, we do not check Alice's permissions for
the file. Later, when FS.close() is called by user Bob (or the JVM shuts down), delete is
invoked for every file in the deleteOnExit list and Bob's permissions are checked for each
file, even though Alice marked these files for deletion.

{quote}
2. therefore you cannot do more in deleteOnExit than you can from exactly the same FS instance
through FileSystem.delete()
{quote}
That is exactly the problem: by marking files for deletion, we can do more than a
FileSystem.delete() call made as another user would allow.

I think this is not correct behavior (see the sketch after this list):
1. Create an FS on behalf of user Bob.
2. Mark some file for deletion on behalf of user Alice (Alice does not have sufficient
permissions to delete this file).
3. Close the FS on behalf of user Bob.
4. The file will be deleted on behalf of user Bob.
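
One possible direction, purely as a sketch and not a proposed patch, is to verify the
caller's permissions at mark time rather than at close time. This assumes
FileSystem.access() (available since Hadoop 2.6) is an acceptable way to do the check:
{code:java}
// Hypothetical mark-time check inside FileSystem (a sketch, not a patch):
// deleting f generally requires WRITE access on its parent directory,
// so fail fast here instead of deferring the check to close().
// FsAction is org.apache.hadoop.fs.permission.FsAction.
public boolean deleteOnExit(Path f) throws IOException {
  Path parent = f.getParent();
  if (parent != null) {
    // throws AccessControlException if the caller lacks WRITE access
    access(parent, FsAction.WRITE);
  }
  synchronized (deleteOnExit) {
    deleteOnExit.add(f);
  }
  return true;
}
{code}
One caveat: for an RPC-backed file system such as HDFS, access() still executes as the user
bound to the connection, so a complete fix may also need to capture
UserGroupInformation.getCurrentUser() at mark time and re-check it at delete time.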

Correct me if I missed something. If anything is unclear, please tell me and I will provide a
better clarification or another example.

> FileSystem.deleteOnExit should check user permissions
> -----------------------------------------------------
>
>                 Key: HADOOP-15725
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15725
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Oleksandr Shevchenko
>            Priority: Major
>              Labels: Security
>         Attachments: deleteOnExitReproduce
>
>
> For now, we are able to add any file to the FileSystem deleteOnExit list. This leads to
> security problems. A user ("Intruder") can get a file system instance which was created by
> another user ("Owner") and mark any files for deletion, even if "Intruder" doesn't have any
> access to these files. Later, when "Owner" invokes the close method (or the JVM shuts down,
> since we have a ShutdownHook which is able to close all file systems), the marked files will
> be deleted successfully, since the deletion is done on behalf of "Owner" (or of the user who
> ran the program).
> I attached the patch [^deleteOnExitReproduce] which reproduces this possibility. I was also
> able to reproduce it on a cluster with both local and distributed file systems:
> {code:java}
> import java.security.PrivilegedExceptionAction;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.security.UserGroupInformation;
>
> public class Main {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     conf.set("fs.default.name", "hdfs://node:9000");
>     conf.set("fs.hdfs.impl",
>         org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
>     final FileSystem fs = FileSystem.get(conf);
>     System.out.println(fs);
>
>     Path f = new Path("/user/root/testfile");
>     System.out.println(f);
>
>     // Mark the file for deletion as user "hive", who has no access to it.
>     UserGroupInformation hive = UserGroupInformation.createRemoteUser("hive");
>     hive.doAs((PrivilegedExceptionAction<Boolean>) () -> fs.deleteOnExit(f));
>
>     // close() runs as the user who created the FS and deletes the file.
>     fs.close();
>   }
> }
> {code}
> Result:
> {noformat}
> root@node:/# hadoop fs -put testfile /user/root
> root@node:/# hadoop fs -chmod 700 /user/root/testfile
> root@node:/# hadoop fs -ls /user/root
> Found 1 items
> -rw------- 1 root supergroup 0 2018-09-06 18:07 /user/root/testfile
> root@node:/# java -jar testDeleteOther.jar 
> log4j:WARN No appenders could be found for logger (org.apache.hadoop.conf.Configuration.deprecation).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
> DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_309539034_1, ugi=root (auth:SIMPLE)]]
> /user/root/testfile
> []
> root@node:/# hadoop fs -ls /user/root
> root@node:/# 
> {noformat}
> We should add a check of the user's permissions before marking a file for deletion.
>  Could someone evaluate this? If no one objects, I would like to start working on this.
>  Thanks a lot for any comments.



