hadoop-mapreduce-user mailing list archives

From Henry Hung <YTHu...@winbond.com>
Subject (Solved) hadoop 2.2.0 QJM exception : NoClassDefFoundError: org/apache/hadoop/hdfs/server/namenode/FSImage
Date Thu, 13 Feb 2014 03:12:51 GMT
Dear All,

Sorry for the noise — I found the root cause of this problem. It turns out I had overwritten hadoop-hdfs-2.2.0.jar
with my own custom jar but forgot to restart the JournalNode process,
so the running process could not find the FSImage class, even though it is actually present inside my custom jar.

Note to myself: make sure to shut down all processes before replacing the jar(s).
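The failure mode above can be reproduced in miniature without Hadoop. A JVM loads classes lazily, so swapping the jar under a running process does not fail immediately — it fails the first time a not-yet-loaded class (here FSImage, triggered by the newEpoch call) must be resolved. The sketch below uses Python's import machinery as a stand-in for the JVM classloader; all paths and names are made up for illustration.

```python
# Minimal, non-Hadoop sketch: modules (like JVM classes) are resolved
# lazily, so removing the "jar" under a running process only breaks the
# FIRST load attempted after the swap -- the analogue of the
# NoClassDefFoundError seen on the JournalNode.
import importlib
import os
import shutil
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "pkg")          # stand-in for the jar
os.makedirs(pkg)
for name in ("__init__.py", "loaded_early.py", "loaded_late.py"):
    with open(os.path.join(pkg, name), "w") as f:
        f.write("VALUE = 'ok'\n")

sys.path.insert(0, root)
from pkg import loaded_early             # resolved while the files exist
print("early:", loaded_early.VALUE)      # early: ok

shutil.rmtree(pkg)                       # "replace the jar" mid-flight
importlib.invalidate_caches()
try:
    from pkg import loaded_late          # first use after the swap fails
    print("late:", loaded_late.VALUE)
except ImportError as e:
    print("late load failed:", type(e).__name__)
```

The already-loaded module keeps working from memory, which is why the JournalNode appeared healthy until the failover exercised a new code path; restarting every affected process is what forces a clean reload.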

Best regards,
Henry

From: MA11 YTHung1
Sent: Thursday, February 13, 2014 10:49 AM
To: user@hadoop.apache.org
Subject: hadoop 2.2.0 QJM exception : NoClassDefFoundError: org/apache/hadoop/hdfs/server/namenode/FSImage

Hi All,

I don't know why the JournalNode log contains this odd "NoClassDefFoundError: org/apache/hadoop/hdfs/server/namenode/FSImage"
exception.
The error occurs every time I switch my NameNode from standby to active.

2014-02-13 10:34:47,873 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing
edits file /data/hadoop/hadoop-data/journal/hadoopdev/current/edits_inprogress_0000000000000133208
-> /data/hadoop/hadoop-data/journal/hadoopdev/current/edits_0000000000000133208-0000000000000133318
2014-02-13 10:36:38,492 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing
edits file /data/hadoop/hadoop-data/journal/hadoopdev64/current/edits_inprogress_0000000000000000281
-> /data/hadoop/hadoop-data/journal/hadoopdev64/current/edits_0000000000000000281-0000000000000000282
2014-02-13 10:36:51,118 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing
edits file /data/hadoop/hadoop-data/journal/hadoopdev/current/edits_inprogress_0000000000000133319
-> /data/hadoop/hadoop-data/journal/hadoopdev/current/edits_0000000000000133319-0000000000000133422
2014-02-13 10:38:38,755 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing
edits file /data/hadoop/hadoop-data/journal/hadoopdev64/current/edits_inprogress_0000000000000000283
-> /data/hadoop/hadoop-data/journal/hadoopdev64/current/edits_0000000000000000283-0000000000000000284
2014-02-13 10:38:54,620 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing
edits file /data/hadoop/hadoop-data/journal/hadoopdev/current/edits_inprogress_0000000000000133423
-> /data/hadoop/hadoop-data/journal/hadoopdev/current/edits_0000000000000133423-0000000000000133432
2014-02-13 10:40:27,543 INFO org.apache.hadoop.hdfs.qjournal.server.Journal: Updating lastPromisedEpoch
from 2 to 3 for client /10.18.30.155
2014-02-13 10:40:27,569 INFO org.apache.hadoop.hdfs.qjournal.server.Journal: Scanning storage
FileJournalManager(root=/data/hadoop/hadoop-data/journal/hadoopdev64)
2014-02-13 10:40:27,570 WARN org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8485, call
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.newEpoch from 10.18.30.155:35408
Call#339 Retry#0: error: java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/server/namenode/FSImage
java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/server/namenode/FSImage
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.validateEditLog(FSEditLogLoader.java:814)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.validateEditLog(EditLogFileInputStream.java:289)
        at org.apache.hadoop.hdfs.server.namenode.FileJournalManager$EditLogFile.validateLog(FileJournalManager.java:457)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.scanStorageForLatestEdits(Journal.java:189)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.newEpoch(Journal.java:301)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.newEpoch(JournalNodeRpcServer.java:132)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.newEpoch(QJournalProtocolServerSideTranslatorPB.java:114)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:17439)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
2014-02-13 10:40:58,074 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing
edits file /data/hadoop/hadoop-data/journal/hadoopdev/current/edits_inprogress_0000000000000133433
-> /data/hadoop/hadoop-data/journal/hadoopdev/current/edits_0000000000000133433-0000000000000133548



Below are the partial logs from the NameNode when it tried to transition to active but failed abruptly:

2014-02-13 10:40:27,389 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping
services started for standby state
2014-02-13 10:40:27,390 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Edit
log tailer interrupted
java.lang.InterruptedException: sleep interrupted
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:334)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:279)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:296)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:456)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:292)
2014-02-13 10:40:27,393 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting
services required for active state
2014-02-13 10:40:27,403 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager:
Starting recovery process for unclosed journal segments...
2014-02-13 10:40:27,502 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments
failed for required journal (JournalAndStream(mgr=QJM to [10.18.30.151:8485, 10.18.30.152:8485,
10.18.30.153:8485], stream=null))
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve
quorum size 2/3. 1 successful responses:
10.18.30.153:8485: lastSegmentTxId: 285

2 exceptions thrown:
10.18.30.152:8485: org/apache/hadoop/hdfs/server/namenode/FSImage
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.validateEditLog(FSEditLogLoader.java:814)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.validateEditLog(EditLogFileInputStream.java:289)
        at org.apache.hadoop.hdfs.server.namenode.FileJournalManager$EditLogFile.validateLog(FileJournalManager.java:457)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.scanStorageForLatestEdits(Journal.java:189)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.newEpoch(Journal.java:301)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.newEpoch(JournalNodeRpcServer.java:132)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.newEpoch(QJournalProtocolServerSideTranslatorPB.java:114)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:17439)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)

10.18.30.151:8485: org/apache/hadoop/hdfs/server/namenode/FSImage
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.validateEditLog(FSEditLogLoader.java:814)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.validateEditLog(EditLogFileInputStream.java:289)
        at org.apache.hadoop.hdfs.server.namenode.FileJournalManager$EditLogFile.validateLog(FileJournalManager.java:457)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.scanStorageForLatestEdits(Journal.java:189)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.newEpoch(Journal.java:301)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.newEpoch(JournalNodeRpcServer.java:132)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.newEpoch(QJournalProtocolServerSideTranslatorPB.java:114)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:17439)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)

        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createNewUniqueEpoch(QuorumJournalManager.java:179)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.recoverUnfinalizedSegments(QuorumJournalManager.java:420)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet$7.apply(JournalSet.java:579)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:352)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:576)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1227)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:895)
        at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1434)
        at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
        at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
        at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1349)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1082)
        at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
        at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
2014-02-13 10:40:27,505 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2014-02-13 10:40:27,507 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at fphd5.ctpilot1.com/10.18.30.155
************************************************************/

________________________________
The privileged confidential information contained in this email is intended for use only by
the addressees as indicated by the original sender of this email. If you are not the addressee
indicated in this email or are not responsible for delivery of the email to such a person,
please kindly reply to the sender indicating this fact and delete all copies of it from your
computer and network server immediately. Your cooperation is highly appreciated. It is advised
that any unauthorized use of confidential information of Winbond is strictly prohibited; and
any information in this email irrelevant to the official business of Winbond shall be deemed
as neither given nor endorsed by Winbond.
