hadoop-common-dev mailing list archives

From "Milind Bhandarkar (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-771) Namenode should return error when trying to delete non-empty directory
Date Fri, 18 May 2007 19:27:16 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12497009 ]

Milind Bhandarkar commented on HADOOP-771:
------------------------------------------

>> On the other hand, won't an RPC per file cause a lot of namenode traffic?

Yes, it would. But only when a user intentionally deletes all of his or her data, which I
believe happens rarely.

I think DFS is optimized for a small number of large files rather than a large number of
small files, so in practice this erasure of an entire tree should not happen frequently.

> Namenode should return error when trying to delete non-empty directory
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-771
>                 URL: https://issues.apache.org/jira/browse/HADOOP-771
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: dfs
>    Affects Versions: 0.8.0
>         Environment: all
>            Reporter: Milind Bhandarkar
>         Assigned To: Sameer Paranjpye
>
> Currently, the namenode.delete() method allows recursive deletion of a directory. That
> is, even a non-empty directory could be deleted using namenode.delete(). To avoid costly
> programmer errors, the namenode should not remove non-empty directories in this method.
> Recursively deleting a directory should either be performed with listPaths() followed by
> a delete() for every path, or with a specific namenode method such as deleteRecursive().
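The two behaviors under discussion can be sketched with a small in-memory model: delete()
refuses a non-empty directory, and the client recurses with listPaths() plus one delete()
per path (the per-file RPC cost noted above). This is a minimal sketch, not the actual
NameNode API; the names MiniNamenode, mkdir, createFile, listPaths, delete, and
deleteRecursive are illustrative.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical in-memory stand-in for the namenode's namespace; not Hadoop code.
class MiniNamenode {
    // directory path -> immediate children (full paths); files have no entry
    private final Map<String, List<String>> tree = new HashMap<>();

    MiniNamenode() { tree.put("/", new ArrayList<>()); }

    void mkdir(String path) {
        tree.put(path, new ArrayList<>());
        tree.get(parentOf(path)).add(path);
    }

    void createFile(String path) {
        tree.get(parentOf(path)).add(path);
    }

    boolean isDir(String path) { return tree.containsKey(path); }

    List<String> listPaths(String dir) {
        return new ArrayList<>(tree.get(dir));
    }

    // Proposed semantics: refuse to delete a non-empty directory.
    boolean delete(String path) {
        if (isDir(path) && !tree.get(path).isEmpty()) return false;
        tree.remove(path);
        tree.get(parentOf(path)).remove(path);
        return true;
    }

    // Client-side recursion: listPaths() then one delete() call per path,
    // which is where the per-file RPC traffic would come from.
    boolean deleteRecursive(String path) {
        if (isDir(path)) {
            for (String child : listPaths(path)) deleteRecursive(child);
        }
        return delete(path);
    }

    private static String parentOf(String path) {
        int i = path.lastIndexOf('/');
        return (i == 0) ? "/" : path.substring(0, i);
    }
}
```

With this sketch, deleting a populated directory directly fails, while the recursive
helper clears the children first and then succeeds.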

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

