hadoop-hdfs-user mailing list archives

From Joshua Tu <tujunxi...@live.com>
Subject RE: NN stopped and cannot recover with error "There appears to be a gap in the edit log"
Date Fri, 15 Nov 2013 07:54:06 GMT
I am using Cloudera CDH 4, the latest version of it. I didn't remove anything from the shell;
as I recall, the issue happened when I added some feature from Cloudera Manager.
Any thoughts?

Best Regards, Joshua Tu

From: bharathvissapragada1990@gmail.com
Date: Fri, 15 Nov 2013 11:41:19 +0530
Subject: Re: NN stopped and cannot recover with error "There appears to be a gap in the edit log"
To: user@hadoop.apache.org

What is your hadoop version? Did you manually delete any files from the nn edits dir? Do you
see this gap in the file listing of the edits directory too? Ideally all the txids appear consecutively
when you do a file listing in that dir.
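As a quick way to do the listing check described above, here is a sketch (not a supported tool; the edits_<firstTxid>-<lastTxid> filename pattern for finalized segments, and the example filenames, are assumptions based on how HDFS names edit log files in the NameNode storage directory):

```python
import re

# Finalized edit log segments in the NN storage dir are assumed to be
# named edits_<firstTxid>-<lastTxid>; this checks that consecutive
# segments leave no gap in transaction ids.
EDITS_RE = re.compile(r"^edits_(\d+)-(\d+)$")

def find_txid_gaps(filenames):
    """Return (expected_txid, found_txid) pairs where a gap starts."""
    segments = sorted(
        (int(m.group(1)), int(m.group(2)))
        for m in map(EDITS_RE.match, filenames) if m
    )
    gaps = []
    for (_, prev_end), (start, _) in zip(segments, segments[1:]):
        if start != prev_end + 1:
            gaps.append((prev_end + 1, start))
    return gaps

# Hypothetical listing with a gap between txid 8363 and 27381,
# matching the numbers in the error later in this thread.
print(find_txid_gaps(["edits_1-8363", "edits_27381-27400"]))
# -> [(8364, 27381)]
```

If this reports a gap at the same txids as the NameNode error, the segment files themselves are missing from the directory rather than merely unreadable.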

On Fri, Nov 15, 2013 at 9:44 AM, Joshua Tu <tujunxiong@live.com> wrote:

Hi there,


I deployed a single node for testing. Today the NN stopped and I cannot start it; it fails with the error:
There appears to be a gap in the edit log.


2013-11-14 15:00:01,431 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
system shutdown complete.

2013-11-14 15:00:01,432 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in
namenode join

java.io.IOException: There appears to be a gap in the edit log.  We expected txid 8364, but
got txid 27381.

       at org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)

       at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:158)

       at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:92)

       at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:744)

       at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:660)

       at org.apache.hadoop.hdfs.server.namenode.FSImage.doUpgrade(FSImage.java:349)

       at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:261)

       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:639)

       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:476)

       at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)

       at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:437)

       at org.apache.hadoop.hdfs.server.namenode.NameNode.&lt;init&gt;(NameNode.java:613)

       at org.apache.hadoop.hdfs.server.namenode.NameNode.&lt;init&gt;(NameNode.java:598)

       at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)

       at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1233)

2013-11-14 15:00:01,445 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1

2013-11-14 15:00:01,448 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:


SHUTDOWN_MSG: Shutting down NameNode at ubcdh/



Since there is only one node, restoring the edit logs from a second copy is not an option, and hadoop namenode -recover
also does not fit this situation.


How can I fix this issue?


