hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4128) 2NN gets stuck in inconsistent state if edit log replay fails in the middle
Date Sun, 24 Feb 2013 16:12:12 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13585405#comment-13585405 ]

Kihwal Lee commented on HDFS-4128:
----------------------------------

I've seen similar issues caused by a quota bug, which I have yet to reproduce. Because of this
bug, the NN allows users to go a bit over quota. On the 2NN, since updateCountForINodeWithQuota()
is called every time it checkpoints, any such count error is corrected, but the quota violations
already allowed by the NN remain (a WARN is printed on the 2NN). If replaying edits on top of
this namespace state hits a quota exception, the 2NN gets into an unrecoverable retry loop.
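
To make the failure mode concrete, here is a toy sketch in plain Java (not HDFS code; the
off-by-one check is only an assumed shape of the quota bug) of how a lenient check on the NN
and a strict check during replay on the 2NN can disagree on the same edit:

{code}
// Toy illustration only -- not HDFS code. ToyQuotaDir and its methods are
// invented names; the lenient check is an assumed shape of the NN-side bug.
class ToyQuotaDir {
    final long quota;   // namespace quota for this directory
    long used;          // usage as tracked by this node

    ToyQuotaDir(long quota, long used) { this.quota = quota; this.used = used; }

    // Lenient, buggy NN-style check: lets 'used' reach quota + 1.
    void addLenient() { if (used <= quota) used++; }

    // Strict check, as applied when the 2NN replays the same kind of edit
    // on top of counts that were recomputed at checkpoint time.
    void addStrict() {
        if (used + 1 > quota) {
            throw new IllegalStateException("quota exceeded: quota=" + quota + ", used=" + used);
        }
        used++;
    }
}
{code}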

It seems the 2NN state needs to be reset when an exception occurs while replaying edits. The
2NN will then load everything from scratch and build an accurate state without losing any edits.
We will still see WARN messages revealing the bugs that cause the NN state to be inaccurate.
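
A minimal sketch of the proposed reset, with hypothetical names (CheckpointState, replayEdits()
and reset() are placeholders, not the actual SecondaryNameNode API):

{code}
import java.io.IOException;

// Hypothetical sketch: if edit replay throws, discard the half-applied
// in-memory state so the next checkpoint attempt rebuilds everything from
// the fsimage and the full set of edits, instead of retrying on top of a
// namespace that ends mid-segment.
interface CheckpointState {
    void replayEdits() throws IOException;   // apply the downloaded edit segments
    void reset();                            // drop the partially applied namespace
}

class CheckpointRunner {
    void runOnce(CheckpointState state) throws IOException {
        try {
            state.replayEdits();
        } catch (IOException | RuntimeException e) {
            // Without this reset, every later segment looks like a txid gap.
            state.reset();
            throw new IOException("Checkpoint aborted: edit replay failed", e);
        }
    }
}
{code}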

Since the 2NN retries every minute, it can cause a huge number of edit files to be generated.
The 2NN should give up after rolling and creating a new edit segment, say, 100 times. This is
related to HDFS-280, but in this case we do not download everything all over again.
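
A sketch of the retry cap (BoundedCheckpointer, MAX_FAILED_ATTEMPTS and the Runnable hook are
hypothetical; the real 2NN schedules doCheckpoint() from its own thread):

{code}
// Hypothetical sketch of bounding consecutive failed checkpoint attempts,
// so the 2NN stops asking the NN to roll a new edit segment every minute.
class BoundedCheckpointer {
    private static final int MAX_FAILED_ATTEMPTS = 100;
    private int consecutiveFailures = 0;

    // Runs one checkpoint attempt; returns true if another attempt should be scheduled.
    boolean runAttempt(Runnable attemptCheckpoint) {
        try {
            attemptCheckpoint.run();
            consecutiveFailures = 0;      // a success clears the counter
            return true;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            // Past the cap, give up rather than keep generating edit files.
            return consecutiveFailures < MAX_FAILED_ATTEMPTS;
        }
    }
}
{code}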


                
> 2NN gets stuck in inconsistent state if edit log replay fails in the middle
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-4128
>                 URL: https://issues.apache.org/jira/browse/HDFS-4128
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.0.2-alpha
>            Reporter: Todd Lipcon
>
> We saw the following issue in a cluster:
> - The 2NN downloads an edit log segment:
> {code}
> 2012-10-29 12:30:57,433 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading /xxxxxxx/current/edits_0000000000049136809-0000000000049176162 expecting start txid #49136809
> {code}
> - It fails in the middle of replay due to an OOME:
> {code}
> 2012-10-29 12:31:21,021 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered exception on operation AddOp [length=0, path=/xxxxxxxx
> java.lang.OutOfMemoryError: Java heap space
> {code}
> - Future checkpoints then fail because the prior edit log replay only got halfway through the stream:
> {code}
> 2012-10-29 12:32:21,214 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading /xxxxx/current/edits_0000000000049176163-0000000000049177224 expecting start txid #49144432
> 2012-10-29 12:32:21,216 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
> java.io.IOException: There appears to be a gap in the edit log.  We expected txid 49144432, but got txid 49176163.
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
