hadoop-common-issues mailing list archives

From "Vinod K V (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-6631) FileUtil.fullyDelete() should continue to delete other files despite failure at any level.
Date Wed, 05 May 2010 15:24:03 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinod K V updated HADOOP-6631:
------------------------------

    Attachment: HADOOP-6631-20100505.txt

Updated patch with a testcase that also verifies non-deletable files.
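
For reference, here is a minimal, hypothetical sketch (not the attached patch) of what a
testcase covering non-deletable files could look like. It assumes POSIX semantics, where
clearing write permission on a directory makes its children non-deletable; run as root or
on Windows it will not behave this way:

    import java.io.File;
    import org.apache.hadoop.fs.FileUtil;

    public class TestFullyDeleteSketch {
      public static void main(String[] args) throws Exception {
        File base = new File(System.getProperty("java.io.tmpdir"), "fullyDeleteTest");
        File locked = new File(base, "locked");        // will be made non-deletable
        File inner = new File(locked, "inner.txt");    // child that cannot be removed
        File sibling = new File(base, "sibling.txt");  // child that can be removed
        locked.mkdirs();
        inner.createNewFile();
        sibling.createNewFile();
        locked.setWritable(false);  // deleting 'inner' now fails (POSIX)

        boolean result = FileUtil.fullyDelete(base);

        // With the fixed behavior, the deletable sibling is gone even though
        // fullyDelete() reports the overall failure by returning false.
        System.out.println("returned false: " + !result);
        System.out.println("sibling deleted: " + !sibling.exists());

        locked.setWritable(true);   // restore permissions so cleanup succeeds
        FileUtil.fullyDelete(base);
      }
    }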

> FileUtil.fullyDelete() should continue to delete other files despite failure at any level.
> ------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-6631
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6631
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs, util
>            Reporter: Vinod K V
>            Assignee: Ravi Gummadi
>             Fix For: 0.22.0
>
>         Attachments: HADOOP-6631-20100505.txt, hadoop-6631-y20s-1.patch, hadoop-6631-y20s-2.patch,
>                      HADOOP-6631.patch, HADOOP-6631.patch, HADOOP-6631.v1.patch
>
>
> Ravi commented about this on HADOOP-6536. Paraphrasing...
> Currently, FileUtil.fullyDelete(myDir) stops deleting the remaining files/directories as
> soon as it fails to delete any file/dir (say, for lack of permissions on that file/dir)
> anywhere under myDir. This is because the method returns whenever the recursive call
> "if (!fullyDelete()) { return false; }" fails at any level of the recursion.
> Shouldn't it instead continue deleting the other files/dirs in the for loop rather than
> returning false there?
> I guess fullyDelete() should delete as many files as possible (similar to 'rm -rf').
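
A minimal sketch of the "keep going" behavior described above (an illustration, not the
actual patch; it ignores details such as symlink handling that a real fix would need):

    import java.io.File;

    class FullyDeleteSketch {
      // Delete as much as possible under 'dir' (rm -rf style): record any
      // failure but never return early, so one undeletable entry does not
      // stop the deletion of its siblings.
      static boolean fullyDelete(File dir) {
        boolean deletionSucceeded = true;
        File[] contents = dir.listFiles();
        if (contents != null) {
          for (File f : contents) {
            if (f.isFile()) {
              if (!f.delete()) {
                deletionSucceeded = false;   // note the failure, keep going
              }
            } else if (!fullyDelete(f)) {    // recurse; no early return
              deletionSucceeded = false;
            }
          }
        }
        // The directory itself can only go once it is empty; attempt it anyway.
        return dir.delete() && deletionSucceeded;
      }
    }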

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

