hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5825) Recursively deleting a directory with millions of files makes NameNode unresponsive for other commands until the deletion completes
Date Mon, 18 May 2009 05:13:45 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12710275#action_12710275 ]

dhruba borthakur commented on HADOOP-5825:
------------------------------------------

Is it possible that FSNamesystem.removePathAndBlocks() is the major bottleneck? If so, we
could perhaps rearrange the code to keep the "freeing up resources" part of the work outside
the FSNamesystem lock.
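
For illustration, a minimal sketch of that idea (not the actual FSNamesystem code; namespaceLock, unlinkSubtree() and invalidateBlocks() are made-up stand-ins for the real NameNode internals): unlink the subtree and collect its block ids while holding the lock, then release the blocks in small batches, re-acquiring the lock briefly each time so other client requests can interleave.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

class DeferredDeleteSketch {
    private final ReentrantLock namespaceLock = new ReentrantLock(); // stand-in for the FSNamesystem lock
    private static final int BLOCK_BATCH_SIZE = 1000;                // bounds time spent per lock hold

    void delete(String path) {
        List<Long> collectedBlocks = new ArrayList<>();

        // Phase 1: under the lock, remove the path from the namespace and
        // gather the block ids that belonged to it. This is cheap compared
        // to actually freeing every block.
        namespaceLock.lock();
        try {
            unlinkSubtree(path, collectedBlocks);
        } finally {
            namespaceLock.unlock();
        }

        // Phase 2: free the blocks in batches, taking the lock only briefly
        // for each batch so other operations are not starved.
        for (int i = 0; i < collectedBlocks.size(); i += BLOCK_BATCH_SIZE) {
            List<Long> batch = collectedBlocks.subList(
                    i, Math.min(i + BLOCK_BATCH_SIZE, collectedBlocks.size()));
            namespaceLock.lock();
            try {
                invalidateBlocks(batch);
            } finally {
                namespaceLock.unlock();
            }
        }
    }

    // Hypothetical helper: detach the directory tree and record its blocks.
    private void unlinkSubtree(String path, List<Long> outBlocks) { /* ... */ }

    // Hypothetical helper: schedule the given blocks for deletion on DataNodes.
    private void invalidateBlocks(List<Long> blocks) { /* ... */ }
}

The batch size bounds how long the lock is held at a stretch; the trade-off is that the namespace change becomes visible before all of the blocks have actually been freed.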

> Recursively deleting a directory with millions of files makes NameNode unresponsive for other commands until the deletion completes
> -----------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5825
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5825
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Suresh Srinivas
>            Assignee: Suresh Srinivas
>
> Delete a directory with millions of files. This could take several minutes (12 minutes observed for 9 million files). While the operation is in progress, the FSNamesystem lock is held and requests from clients are not handled until the deletion completes.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

