hadoop-common-issues mailing list archives

From "Vinod K V (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-6631) FileUtil.fullyDelete() should continue to delete other files despite failure at any level.
Date Tue, 04 May 2010 11:14:58 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12863759#action_12863759 ]

Vinod K V commented on HADOOP-6631:
-----------------------------------

I have one comment on the patch. When we list the files in a dir and then fail to delete *a
file*, we return from the method. This may be right when the parent directory itself is
non-writable, because then going on with the other files/dirs is useless anyway. But I quickly
checked the man page for unlink on Linux and realized that deleting a file can fail when
 - write permission is denied on the parent dir,
 - the file is being used by some other process,
 - the file doesn't exist anymore,
 - or the file is on a read-only filesystem.

The current solution is enough for the 1st and 4th cases. The 2nd and 3rd are entirely possible
too, and should also be handled gracefully by proceeding to delete the other files/dirs in the
parent dir. To optimize the non-writable-directory case, we may want to check right at the
beginning whether the parent dir is writable.
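
A minimal sketch of such a "keep going" delete, assuming plain java.io.File semantics (the class
and variable names are made up for illustration; this is not the attached patch):

    import java.io.File;

    public class FullyDeleteSketch {

      // Delete 'dir' and everything under it, continuing past individual failures
      // instead of returning on the first one. Returns true only if everything,
      // including 'dir' itself, was deleted.
      public static boolean fullyDelete(File dir) {
        // Check suggested above: if the directory itself is not writable, none of
        // its entries can be unlinked, so don't bother descending into it.
        if (dir.isDirectory() && !dir.canWrite()) {
          return false;
        }
        boolean deletedAll = true;
        File[] contents = dir.listFiles();   // null if 'dir' is a plain file
        if (contents != null) {
          for (File child : contents) {
            boolean deleted = child.isDirectory() ? fullyDelete(child) : child.delete();
            if (!deleted) {
              // e.g. the file is held open by another process, or it disappeared
              // between listFiles() and delete(); remember the failure but keep
              // going so the remaining siblings still get cleaned up.
              deletedAll = false;
            }
          }
        }
        // Finally remove 'dir' itself (this will fail anyway if anything above failed).
        return dir.delete() && deletedAll;
      }
    }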

> FileUtil.fullyDelete() should continue to delete other files despite failure at any level.
> ------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-6631
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6631
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs, util
>            Reporter: Vinod K V
>            Assignee: Ravi Gummadi
>             Fix For: 0.22.0
>
>         Attachments: hadoop-6631-y20s-1.patch, hadoop-6631-y20s-2.patch, HADOOP-6631.patch, HADOOP-6631.patch
>
>
> Ravi commented about this on HADOOP-6536. Paraphrasing...
> Currently FileUtil.fullyDelete(myDir) stops deleting the remaining files/directories as soon
> as it is unable to delete one file/dir anywhere under myDir (say, because it lacks permission
> to delete that file/dir). This is because we return from the method whenever the recursive
> call "if(!fullyDelete()) {return false;}" fails at any level of recursion.
> Shouldn't it continue with the deletion of the other files/dirs in the for loop instead of
> returning false there?
> I guess fullyDelete() should delete as many files as possible (similar to 'rm -rf').
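
For context, the early-return pattern the description refers to looks roughly like this; it is a
reconstruction for illustration, not the actual Hadoop source. A single undeletable entry makes
the method give up on everything it has not visited yet:

    import java.io.File;

    public class EarlyReturnDelete {
      public static boolean fullyDelete(File dir) {
        File[] contents = dir.listFiles();
        if (contents != null) {
          for (File child : contents) {
            boolean deleted = child.isDirectory() ? fullyDelete(child) : child.delete();
            if (!deleted) {
              return false;   // bails out here; the remaining entries are never tried
            }
          }
        }
        return dir.delete();
      }
    }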

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

