hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE
Date Sun, 20 May 2012 06:20:41 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13279674#comment-13279674 ]

Todd Lipcon commented on HDFS-3443:
-----------------------------------

Just to clarify, this is a bug with HA in general, not specifically auto-failover, right?
I.e., if a user manually triggered a failover as the NN was starting up, the same problem
would occur, I think.
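
For illustration only (this is not the patch for this JIRA): the ZKFC path and a manual
"haadmin -transitionToActive" both end up in FSNamesystem.startActiveServices(), so a
self-contained sketch of the race and of a defensive guard there could look like the
following. The class and method names are simplified stand-ins, and the guard plus its
exception message are illustrative, not the real FSNamesystem code.

{noformat}
import java.io.IOException;

public class FailoverRaceSketch {
  // Stand-in for the EditLogTailer that standby services create.
  static class EditLogTailerStub {
    void catchupDuringFailover() {
      System.out.println("caught up with edits from the old active");
    }
  }

  // Stays null until standby initialization finishes -- that null is the whole bug.
  private volatile EditLogTailerStub editLogTailer;

  // Stand-in for the tail end of standby service startup.
  void startStandbyServices() {
    editLogTailer = new EditLogTailerStub();
  }

  // Stand-in for startActiveServices(); the null guard is the illustrative part.
  void startActiveServices() throws IOException {
    EditLogTailerStub tailer = editLogTailer;
    if (tailer == null) {
      // Without a guard like this, the dereference below corresponds to the NPE
      // in the report (FSNamesystem.startActiveServices in the stack trace).
      throw new IOException(
          "Standby services not fully initialized; cannot transition to active yet");
    }
    tailer.catchupDuringFailover();
  }

  public static void main(String[] args) throws IOException {
    FailoverRaceSketch ns = new FailoverRaceSketch();
    try {
      ns.startActiveServices();   // transition arrives too early -> clean rejection
    } catch (IOException e) {
      System.out.println("rejected: " + e.getMessage());
    }
    ns.startStandbyServices();    // standby initialization completes
    ns.startActiveServices();     // now the transition can catch up and proceed
  }
}
{noformat}

Run as-is, it first rejects the too-early transition cleanly and then succeeds once the
standby initialization has completed, regardless of whether the caller is ZKFC or an
operator.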
                
> Unable to catch up edits during standby to active switch due to NPE
> -------------------------------------------------------------------
>
>                 Key: HDFS-3443
>                 URL: https://issues.apache.org/jira/browse/HDFS-3443
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: auto-failover
>            Reporter: suja s
>            Assignee: Uma Maheswara Rao G
>
> Start NN.
> Let the NN standby services be started.
> Before the editLogTailer is initialised, start ZKFC and allow the active services startup to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw an NPE (the window is restated as a timeline sketch after the log below).
> void startActiveServices() throws IOException {
>     LOG.info("Starting services required for active state");
>     writeLock();
>     try {
>       FSEditLog editLog = dir.fsImage.getEditLog();
>       
>       if (!editLog.isOpenForWrite()) {
>         // During startup, we're already open for write during initialization.
>         editLog.initJournalsForWrite();
>         // May need to recover
>         editLog.recoverUnclosedStreams();
>         
>         LOG.info("Catching up to latest edits from old active before " +
>             "taking over writer role in edits logs.");
>         editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
> 	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
> 	at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
> 	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
> 	at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
> 	at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
> 	at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
> 	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
> 	at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
> 	at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
> 	at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
> 	at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}
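
To restate the window from the description above as a timeline (only the ordering
matters; the column headings are illustrative):

{noformat}
NN startup thread (standby)                  ZKFC / haadmin RPC handler
---------------------------                  --------------------------
standby services starting,
editLogTailer not yet initialised
                                             transitionToActive()
                                               -> FSNamesystem.startActiveServices()
                                                    -> editLogTailer.catchupDuringFailover()  => NPE
editLogTailer initialised (too late)
{noformat}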

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
