hadoop-hdfs-issues mailing list archives

From "Vinay (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE
Date Thu, 25 Oct 2012 05:34:12 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483910#comment-13483910 ]

Vinay commented on HDFS-3443:
-----------------------------

I have one more option to solve this without breaking the inheritance for BackupNode.

How about creating the EditLogTailer instance inside the constructor of FSNamesystem, since it is used in both Standby and Active states? We would still start and stop the tailer thread only in the Standby state, as usual.
{code}this.tailerThread = new EditLogTailerThread();{code}
The above initialization would be done in EditLogTailer#start().

Currently, *editLogTailer* is the only object that is initialized in the standby state but also used in the active state, so the order of state transitions always has to be maintained. With the suggested fix, maintaining that order is no longer required.
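
To make the intent concrete, below is a small self-contained sketch. It is only an illustration, not the actual patch: the nested classes merely mimic FSNamesystem and EditLogTailer. It shows the tailer object being created in the constructor, while the worker thread is still created and started only from start():
{code}
// Simplified, self-contained illustration of the suggested ordering fix.
// Class names mimic the real ones, but this is NOT the actual HDFS code.
public class TailerInitSketch {

  // Stand-in for EditLogTailer: the object is constructed eagerly, but its
  // worker thread is created only when start() is called (standby state).
  static class EditLogTailer {
    private Thread tailerThread;

    synchronized void start() {
      tailerThread = new Thread(new Runnable() {
        @Override
        public void run() {
          // Tail edits in a loop (omitted in this sketch).
        }
      });
      tailerThread.start();
    }

    synchronized void stop() throws InterruptedException {
      if (tailerThread != null) {
        tailerThread.interrupt();
        tailerThread.join();
      }
    }

    void catchupDuringFailover() {
      // Safe to call even if start() was never reached: the tailer object
      // itself always exists once the namesystem has been constructed.
    }
  }

  // Stand-in for FSNamesystem: the tailer is created in the constructor,
  // so startActiveServices() can never see a null editLogTailer.
  static class FSNamesystem {
    private final EditLogTailer editLogTailer = new EditLogTailer();

    void startStandbyServices() {
      editLogTailer.start();
    }

    void startActiveServices() {
      // No NPE even if standby services never got as far as starting the tailer.
      editLogTailer.catchupDuringFailover();
    }
  }

  public static void main(String[] args) {
    FSNamesystem fsn = new FSNamesystem();
    // Transition straight to active without the standby init having run:
    fsn.startActiveServices();
  }
}
{code}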

                
> Unable to catch up edits during standby to active switch due to NPE
> -------------------------------------------------------------------
>
>                 Key: HDFS-3443
>                 URL: https://issues.apache.org/jira/browse/HDFS-3443
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: auto-failover
>            Reporter: suja s
>            Assignee: amith
>         Attachments: HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised, start ZKFC and allow the active services start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
>     LOG.info("Starting services required for active state");
>     writeLock();
>     try {
>       FSEditLog editLog = dir.fsImage.getEditLog();
>       
>       if (!editLog.isOpenForWrite()) {
>         // During startup, we're already open for write during initialization.
>         editLog.initJournalsForWrite();
>         // May need to recover
>         editLog.recoverUnclosedStreams();
>         
>         LOG.info("Catching up to latest edits from old active before " +
>             "taking over writer role in edits logs.");
>         editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
> 	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
> 	at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
> 	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
> 	at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
> 	at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
> 	at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
> 	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
> 	at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
> 	at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
> 	at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
> 	at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
