hadoop-hdfs-issues mailing list archives

From "Daryn Sharp (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4128) 2NN gets stuck in inconsistent state if edit log replay fails in the middle
Date Fri, 01 Mar 2013 14:53:14 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13590586#comment-13590586 ]
Daryn Sharp commented on HDFS-4128:

Patch looks straightforward.  A few comments:
* The default of 60 retries seems a bit high, esp. for a cluster with a large image and edits.
 I'd think something like the standard 3 would be sufficient, because if the 2NN aborts it
will get the admin's attention faster.
* I think ">= maxRetries" should be ">", because ">=" counts attempts instead of retries.
 I.e. currently if I specify 1 max retry, it doesn't retry at all.
* Given the former issue, there should be a test that maxRetries is actually honored.
* Spelling police told me "occured" should be "occurred" :)
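The ">=" vs ">" point above is a classic attempts-vs-retries off-by-one. A minimal sketch of the two loop guards (the names here are illustrative, not taken from the patch):

```java
public class RetrySemantics {
    // Number of retries (attempts beyond the first) a loop permits when the
    // guard compares failures with ">=" versus ">". Every attempt fails in
    // this sketch, so the loop always runs until the guard trips.
    static int retriesWithGte(int maxRetries) {
        int attempts = 0;
        int failures = 0;
        while (true) {
            attempts++;
            failures++;
            if (failures >= maxRetries) break; // aborts once failures reach maxRetries
        }
        return attempts - 1; // retries = attempts beyond the first
    }

    static int retriesWithGt(int maxRetries) {
        int attempts = 0;
        int failures = 0;
        while (true) {
            attempts++;
            failures++;
            if (failures > maxRetries) break; // allows maxRetries genuine retries
        }
        return attempts - 1;
    }

    public static void main(String[] args) {
        // With maxRetries = 1, ">=" yields 0 retries; ">" yields 1.
        System.out.println(retriesWithGte(1)); // 0
        System.out.println(retriesWithGt(1));  // 1
    }
}
```

With maxRetries = 1, the ">=" guard aborts after the first attempt (zero retries), while ">" permits exactly one retry, which matches the configured value.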

> 2NN gets stuck in inconsistent state if edit log replay fails in the middle
> ---------------------------------------------------------------------------
>                 Key: HDFS-4128
>                 URL: https://issues.apache.org/jira/browse/HDFS-4128
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.0.2-alpha
>            Reporter: Todd Lipcon
>            Assignee: Kihwal Lee
>         Attachments: hdfs-4128.patch
> We saw the following issue in a cluster:
> - The 2NN downloads an edit log segment:
> {code}
> 2012-10-29 12:30:57,433 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading /xxxxxxx/current/edits_0000000000049136809-0000000000049176162 expecting start txid #49136809
> {code}
> - It fails in the middle of replay due to an OOME:
> {code}
> 2012-10-29 12:31:21,021 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered exception on operation AddOp [length=0, path=/xxxxxxxx
> java.lang.OutOfMemoryError: Java heap space
> {code}
> - Future checkpoints then fail because the prior edit log replay only got halfway through the stream:
> {code}
> 2012-10-29 12:32:21,214 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading /xxxxx/current/edits_0000000000049176163-0000000000049177224 expecting start txid #49144432
> 2012-10-29 12:32:21,216 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
> java.io.IOException: There appears to be a gap in the edit log.  We expected txid 49144432, but got txid 49176163.
> {code}
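The gap error in the log above follows from a simple invariant: each edit segment must begin at the txid immediately after the last one applied. Because the interrupted replay left off at txid 49144431, the 2NN expects 49144432, but the next downloaded segment starts at 49176163. A minimal sketch of that check (illustrative only, not the actual FSImage code):

```java
import java.io.IOException;

public class EditLogGapCheck {
    /** Throws if a segment does not start where the previous replay left off. */
    static void checkNoGap(long lastAppliedTxId, long segmentStartTxId) throws IOException {
        long expected = lastAppliedTxId + 1;
        if (segmentStartTxId != expected) {
            throw new IOException("There appears to be a gap in the edit log. We expected txid "
                + expected + ", but got txid " + segmentStartTxId + ".");
        }
    }

    public static void main(String[] args) {
        // Replay aborted partway, leaving lastAppliedTxId = 49144431, so the
        // next expected txid is 49144432; the next segment starts at 49176163.
        try {
            checkNoGap(49144431L, 49176163L);
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is why the inconsistent in-memory state matters: the gap check is correct, but once the half-applied image is kept around, every subsequent checkpoint trips it.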

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
