hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6651) Deletion failure can leak inodes permanently.
Date Wed, 28 Jan 2015 03:40:35 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294669#comment-14294669 ]

Hadoop QA commented on HDFS-6651:

{color:green}+1 overall{color}.  Here are the results of testing the latest attachment 
  against trunk revision 18741ad.

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 5 new or modified
test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of
javac compiler warnings.

    {color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version
2.0.3) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number
of release audit warnings.

    {color:green}+1 core tests{color}.  The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9353//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9353//console

This message is automatically generated.

> Deletion failure can leak inodes permanently.
> ---------------------------------------------
>                 Key: HDFS-6651
>                 URL: https://issues.apache.org/jira/browse/HDFS-6651
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Kihwal Lee
>            Assignee: Jing Zhao
>            Priority: Critical
>         Attachments: HDFS-6651.000.patch, HDFS-6651.001.patch
> As discussed in HDFS-6618, if a deletion of a tree fails in the middle, any collected inodes
and blocks will not be removed from {{INodeMap}} and {{BlocksMap}}. 
> Since the fsimage is saved by iterating over {{INodeMap}}, the leak will persist across name
node restarts. Although blanked-out inodes will not have references to blocks, blocks will still
refer to the inode as their {{BlockCollection}}. As long as that reference is not null, the blocks will live on.
The leaked blocks from blanked-out inodes will go away after a restart.
> Options (when the delete fails in the middle):
> - Complete the partial delete: edit log the partial delete and remove inodes and blocks.

> - Somehow undo the partial delete.
> - Check quota for snapshot diff beforehand for the whole subtree.
> - Ignore quota check during delete even if snapshot is present.
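To illustrate the leak pattern described in the quoted issue, here is a minimal, self-contained Java sketch. The class and method names ({{InodeLeakSketch}}, {{deleteSubtree}}) are hypothetical stand-ins, not the actual HDFS code: it only models how an exception thrown mid-delete can leave collected inode IDs behind in a map that is later persisted.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical simplified model of the leak; names are illustrative,
// not the real HDFS FSNamesystem/INodeMap classes.
public class InodeLeakSketch {
    static Map<Long, String> inodeMap = new HashMap<>();

    // Simulates a subtree delete that throws (e.g. a quota/snapshot-diff
    // check failure) after inodes have been collected but before the
    // collected IDs are removed from the map.
    static void deleteSubtree(long[] collectedIds, boolean failMidway) {
        // ... inodes detached from the directory tree here ...
        if (failMidway) {
            // Failure before cleanup: collected inodes stay in inodeMap.
            throw new IllegalStateException("quota check failed mid-delete");
        }
        for (long id : collectedIds) {
            inodeMap.remove(id); // cleanup that never runs on failure
        }
    }

    public static void main(String[] args) {
        inodeMap.put(1L, "/dir");
        inodeMap.put(2L, "/dir/file");
        try {
            deleteSubtree(new long[]{1L, 2L}, true);
        } catch (IllegalStateException e) {
            // The tree no longer references these inodes, but the map still
            // does, so they would be written out on the next fsimage save.
            System.out.println("leaked inodes: " + inodeMap.size());
        }
    }
}
```

Running the sketch prints "leaked inodes: 2", mirroring how the leak survives a saved fsimage: the save walks the map, not the tree.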

This message was sent by Atlassian JIRA
