hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1704) Throttling for HDFS Trash purging
Date Sat, 11 Aug 2007 00:29:42 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12519176 ]

dhruba borthakur commented on HADOOP-1704:

In the current code, the namenode sends out at most 1000 block deletion requests per heartbeat.
This means that the RecentInvalidateSets data structure could become bloated if a large number
of files are deleted at the same instant. Deleting a file removes it from the FsDirectory, but
its blocks are not removed from the blocksMap until the next block report from the datanode.
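The throttling described above can be sketched as follows. This is an illustrative model only, not the actual Hadoop namenode code: the class and method names are hypothetical, and only the 1000-request-per-heartbeat cap comes from the comment.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of per-heartbeat block-invalidation throttling.
// Identifiers are illustrative; only the 1000-block cap is from the comment.
public class InvalidateThrottle {
    // Cap on block deletion requests sent per heartbeat, as described above.
    static final int BLOCK_INVALIDATE_LIMIT = 1000;

    // Maps datanode -> blocks awaiting deletion (the "RecentInvalidateSets" idea).
    private final Map<String, Deque<Long>> recentInvalidateSets = new HashMap<>();

    // Called when a file is deleted: its blocks are queued, not deleted immediately.
    void addInvalidation(String datanode, long blockId) {
        recentInvalidateSets
            .computeIfAbsent(datanode, k -> new ArrayDeque<>())
            .add(blockId);
    }

    // Called when a heartbeat arrives: drain at most BLOCK_INVALIDATE_LIMIT blocks,
    // so a mass deletion is spread over many heartbeats while the queue stays bloated.
    List<Long> blocksToInvalidate(String datanode) {
        List<Long> batch = new ArrayList<>();
        Deque<Long> pending = recentInvalidateSets.get(datanode);
        if (pending == null) return batch;
        while (!pending.isEmpty() && batch.size() < BLOCK_INVALIDATE_LIMIT) {
            batch.add(pending.poll());
        }
        if (pending.isEmpty()) recentInvalidateSets.remove(datanode);
        return batch;
    }
}
```

In this model, deleting 100,000 files at one instant leaves tens of thousands of entries queued across 100 heartbeats, which is exactly the bloat the comment warns about.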

> Throttling for HDFS Trash purging
> ---------------------------------
>                 Key: HADOOP-1704
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1704
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: dhruba borthakur
> When HDFS Trash is enabled, deleting a file/directory moves it to the "Trash" directory.
> The "Trash" directory is periodically purged by the Namenode. This means that all
> files/directories that users deleted in the last Trash period get "really" deleted when
> the Trash purging occurs. This might cause a burst of file/directory deletions.
> The Namenode tracks blocks that belonged to deleted files in a data structure named
> "RecentInvalidateSets". There is a possibility that Trash purging may cause this data
> structure to bloat, causing undesirable behaviour of the Namenode.
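One way the throttling requested in the issue title could work is to purge expired Trash entries in bounded batches rather than all at once. The sketch below is an assumption about how such a throttle might look, not the patch ultimately proposed; the class name and the per-pass cap are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Deque;
import java.util.List;

// Illustrative sketch of a throttled Trash purger: expired entries are
// deleted a bounded number at a time, so one purge does not flood the
// Namenode with deletions. MAX_DELETES_PER_PASS is a hypothetical cap.
public class ThrottledTrashPurger {
    static final int MAX_DELETES_PER_PASS = 100;

    private final Deque<String> expiredEntries = new ArrayDeque<>();

    // Collect paths whose Trash checkpoint has expired.
    void enqueueExpired(Collection<String> paths) {
        expiredEntries.addAll(paths);
    }

    // Each purge pass deletes at most MAX_DELETES_PER_PASS entries,
    // spreading the deletion burst across several passes.
    List<String> purgeOnce() {
        List<String> deleted = new ArrayList<>();
        while (!expiredEntries.isEmpty() && deleted.size() < MAX_DELETES_PER_PASS) {
            deleted.add(expiredEntries.poll());
        }
        return deleted;
    }
}
```

Under this scheme a Trash period that expired 250 directories would be drained over three passes (100, 100, 50) instead of one burst.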

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
