hadoop-hdfs-issues mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-7707) Edit log corruption due to delayed block removal again
Date Wed, 04 Feb 2015 14:12:39 GMT

    [ https://issues.apache.org/jira/browse/HDFS-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14305099#comment-14305099 ]

Hudson commented on HDFS-7707:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #2026 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2026/])
HDFS-7707. Edit log corruption due to delayed block removal again. Contributed by Yongjun Zhang (kihwal: rev 843806d03ab1a24f191782f42eb817505228eb9f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeleteRace.java


> Edit log corruption due to delayed block removal again
> ------------------------------------------------------
>
>                 Key: HDFS-7707
>                 URL: https://issues.apache.org/jira/browse/HDFS-7707
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.6.0
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>             Fix For: 2.7.0
>
>         Attachments: HDFS-7707.001.patch, HDFS-7707.002.patch, HDFS-7707.003.patch, reproduceHDFS-7707.patch
>
>
> Edit log corruption is seen again, even with the fix from HDFS-6825.
> Prior to the HDFS-6825 fix, if dirX was deleted recursively, an OP_CLOSE could still get into the edit log for a fileY under dirX, corrupting the edit log (restarting the NN with that edit log would fail).
> What HDFS-6825 does to fix this is to detect whether fileY has already been deleted by checking the ancestor dirs on its path: if any of them no longer exists, fileY is considered already deleted and no OP_CLOSE is written to the edit log for it.
> For this new edit log corruption, what I found is that the client first deleted dirX recursively and then immediately created another dir with exactly the same name. Because HDFS-6825 relies on a namespace check (whether dirX exists in its parent dir) to decide whether a file has been deleted, the newly created dirX defeats that check, and an OP_CLOSE for the already-deleted file still gets into the edit log, due to delayed block removal.
> What we need is a more robust way to detect whether a file has been deleted.
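
To make the failure mode described above concrete, here is a minimal, self-contained sketch of a name-based ancestor check in the spirit of HDFS-6825, together with the delete-then-recreate sequence that defeats it. The Dir class and ancestorsExist method are hypothetical simplifications for illustration only; they are not the real FSNamesystem/INode code.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Toy namespace tree; purely illustrative, not the real HDFS INode classes.
class Dir {
    final Map<String, Dir> children = new HashMap<>();
}

public class AncestorCheckDemo {
    // HDFS-6825-style check: resolve the file's ancestor dirs by *name* from
    // the root; if every component resolves, the file is assumed to be live.
    static boolean ancestorsExist(Dir root, String... ancestorNames) {
        Dir cur = root;
        for (String name : ancestorNames) {
            cur = cur.children.get(name);
            if (cur == null) {
                return false; // an ancestor is gone, so the file must be deleted
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Dir root = new Dir();
        Dir dirX = new Dir();
        root.children.put("dirX", dirX);
        // fileY lives under dirX; its blocks are still awaiting removal.

        // 1. The client deletes dirX recursively ...
        root.children.remove("dirX");
        // ... at this point the check correctly reports fileY as deleted.
        System.out.println(ancestorsExist(root, "dirX")); // false

        // 2. ... then immediately recreates a dir with the same name.
        root.children.put("dirX", new Dir());

        // The name-based check now passes again, so the NameNode would treat
        // the old fileY as still live and could log an OP_CLOSE for it.
        System.out.println(ancestorsExist(root, "dirX")); // true
    }
}
{code}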
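One more robust direction, sketched below under the same simplified model, is to decide deletion by walking up from the file's own inode and comparing object identity against what each parent currently holds, rather than re-resolving names from the root: a same-named replacement directory is a different object and therefore does not fool the walk. The Node class and isReachableFromRoot method are hypothetical and only illustrate the idea; they are not taken from the actual HDFS-7707 patch.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Toy inodes with parent back-pointers; illustrative only, not the real
// org.apache.hadoop.hdfs.server.namenode.INode classes.
class Node {
    final String name;
    Node parent;                              // null for the root
    final Map<String, Node> children = new HashMap<>();

    Node(String name) { this.name = name; }

    void addChild(Node child) {
        children.put(child.name, child);
        child.parent = this;
    }

    void removeChild(Node child) {
        // The child's parent pointer is deliberately left in place here,
        // mimicking a detached subtree that other code may still hold on to
        // (e.g. while block removal is still pending).
        children.remove(child.name);
    }
}

public class IdentityCheckDemo {
    // Robust check: walk *up* from the file's own inode and verify, by object
    // identity, that each node is still the child registered under its name
    // in its parent, all the way to the root.
    static boolean isReachableFromRoot(Node file, Node root) {
        Node cur = file;
        while (cur != root) {
            Node parent = cur.parent;
            if (parent == null || parent.children.get(cur.name) != cur) {
                return false; // detached somewhere along the path: file is deleted
            }
            cur = parent;
        }
        return true;
    }

    public static void main(String[] args) {
        Node root = new Node("/");
        Node dirX = new Node("dirX");
        Node fileY = new Node("fileY");
        root.addChild(dirX);
        dirX.addChild(fileY);

        // Delete dirX recursively, then recreate a same-named directory.
        root.removeChild(dirX);
        root.addChild(new Node("dirX"));

        // The old fileY is no longer reachable from the root by identity,
        // so no OP_CLOSE should be logged for it.
        System.out.println(isReachableFromRoot(fileY, root)); // false
    }
}
{code}

The identity comparison matters precisely because block removal is delayed: the detached fileY inode can still be referenced while a fresh dirX already occupies the old name, so only a check that distinguishes the old and new inodes can tell the two apart.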



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
