hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3605) Block mistakenly marked corrupt during edit log catchup phase of failover
Date Sat, 14 Jul 2012 23:36:34 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13414546#comment-13414546 ]

Todd Lipcon commented on HDFS-3605:

TestPipelinesFailover is probably related. I'll look into that.
> Block mistakenly marked corrupt during edit log catchup phase of failover
> -------------------------------------------------------------------------
>                 Key: HDFS-3605
>                 URL: https://issues.apache.org/jira/browse/HDFS-3605
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ha, name-node
>    Affects Versions: 2.0.0-alpha, 2.0.1-alpha
>            Reporter: Brahma Reddy Battula
>            Assignee: Todd Lipcon
>         Attachments: HDFS-3605.patch, TestAppendBlockMiss.java, hdfs-3605.txt
> Open the file for append.
> Write data and sync.
> After the next log roll, once the standby NN has tailed the edits, close the append stream.
> Call append multiple times on the same file before the next edit log roll.
> Now abruptly kill the current active NameNode.
> At this point the block is reported missing.
> This may be because all the latest block updates were queued in the standby
> NameNode; during failover, the first OP_CLOSE processed from the pending queue
> marked the block as corrupt.
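
For reference, the quoted repro steps map roughly onto the client-side sketch below. This is a minimal illustration, not the attached TestAppendBlockMiss.java; the path, data, and cluster setup are assumptions, and actually triggering the log rolls and the failover would need a MiniDFSCluster or external control of the two NameNodes.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendBlockMissRepro {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/test/appendBlockMiss"); // hypothetical path
        byte[] data = "some bytes".getBytes();

        // Steps 1-2: open the file and write + sync (hflush is the
        // visibility/"sync" call in the 2.0 client API).
        FSDataOutputStream out = fs.create(path);
        out.write(data);
        out.hflush();

        // Step 3: after the next edit log roll, once the standby NN has
        // tailed the edits, close the append stream.
        out.close();

        // Step 4: append multiple times before the next edit log roll,
        // so the resulting ops are only queued on the standby.
        for (int i = 0; i < 3; i++) {
          out = fs.append(path);
          out.write(data);
          out.close();
        }

        // Step 5: abruptly kill the active NameNode and fail over. Per
        // the report, the first OP_CLOSE replayed during the edit log
        // catchup phase then marks the block corrupt on the new active.
      }
    }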

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

