hadoop-hdfs-issues mailing list archives

From "gary murry (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-740) rm and rmr can accidently delete user's data
Date Wed, 28 Oct 2009 19:56:59 GMT

    [ https://issues.apache.org/jira/browse/HDFS-740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12771084#action_12771084 ]

gary murry commented on HDFS-740:

Steps to Reproduce:
1) Make sure fs.trash.interval in core-site.xml is set to a positive number
2) Copy a large file into HDFS
3) Set a low space quota on your HDFS directory
4) Remove the large file with rm

A message appears saying that the directory could not be created in the trash, but that
the file was deleted anyway.
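The steps above can be sketched as shell commands. This is only an illustrative reproduction, assuming a running HDFS cluster, admin rights for the quota step, and hypothetical paths and quota values:

```shell
# 1) Trash must be enabled: fs.trash.interval set to a positive number
#    (minutes) in core-site.xml, e.g.
#    <property><name>fs.trash.interval</name><value>10</value></property>

# 2) Copy a large file into HDFS (paths are illustrative)
hadoop fs -put /tmp/largefile /user/alice/largefile

# 3) Set a low space quota on the user's directory (requires admin)
hadoop dfsadmin -setSpaceQuota 1m /user/alice

# 4) Remove the file; the move into .Trash fails because of the quota,
#    yet the file is deleted anyway -- the reported bug
hadoop fs -rm /user/alice/largefile
```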

If the file fails to move to trash, then it should not be deleted.
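The proposed behavior amounts to a safe-delete pattern: attempt the move into trash first, and only treat the removal as done if that move succeeds. A minimal sketch in Python using local filesystem operations (this is not Hadoop's actual code; `safe_remove` and its arguments are hypothetical):

```python
import os
import shutil


def safe_remove(path, trash_dir):
    """Move path into trash_dir; if the move fails, delete nothing.

    Returns True when the file now lives in trash_dir, False when the
    move failed and the original file was left untouched.
    """
    try:
        # Creating the trash directory can fail (e.g. quota exceeded,
        # or the trash path is not a directory) -- that failure must
        # happen *before* the original file is touched.
        os.makedirs(trash_dir, exist_ok=True)
        shutil.move(path, os.path.join(trash_dir, os.path.basename(path)))
        return True
    except OSError:
        # Move failed: the original file is still in place.
        return False
```

The key design point mirrors the comment above: the delete is the *last* step, so a quota failure during the trash move can never strand the user without either copy.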

> rm and rmr can accidently delete user's data
> --------------------------------------------
>                 Key: HDFS-740
>                 URL: https://issues.apache.org/jira/browse/HDFS-740
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.20.1
>            Reporter: gary murry
> With trash turned on, if a user is over his quota and does a rm (or rmr), the
> file is deleted without a copy being placed in the trash.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
