hadoop-hdfs-issues mailing list archives

From "Chao Sun (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HDFS-13145) SBN crash when transition to ANN with in-progress edit tailing enabled
Date Wed, 21 Feb 2018 19:56:00 GMT

[ https://issues.apache.org/jira/browse/HDFS-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16371921#comment-16371921 ]

Chao Sun edited comment on HDFS-13145 at 2/21/18 7:55 PM:
----------------------------------------------------------

Did more research on this issue. I think it happens in the following order (a toy walk-through with made-up transaction IDs follows the list):
 1. The active NN exits right before flushing the dummy batch. Also, because of the abrupt exit
(in our case, it exited with SIGNAL 15: SIGTERM, even when we call {{hadoop-daemon.sh stop
namenode}}), it will not stop the active services and, most importantly, will not finalize
the current log segment. As a result, the {{committedTxnId}} on the remote journal will be
less than the {{endTxId}}.
2. When the SBN takes over, it will [catch up to the latest edits from the old active|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L1214].
When selecting input streams, it will set {{endTxId}} to {{committedTxnId}} if the latter
is smaller:
{code:java}
        // If it's bounded by durable Txns, endTxId could not be larger
        // than committedTxnId. This ensures the consistency.
        if (onlyDurableTxns && inProgressOk) {
          endTxId = Math.min(endTxId, committedTxnId);
          if (endTxId < remoteLog.getStartTxId()) {
            LOG.warn("Found endTxId (" + endTxId + ") that is less than " +
                "the startTxId (" + remoteLog.getStartTxId() +
                ") - setting it to startTxId.");
            endTxId = remoteLog.getStartTxId();
          }
        }
    {code}

 3. After catching up all edits, the SBN [sets nextTxId to endTxId + 1|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L1238]
and calls {{FSEditLog::openForWrite}}, which selects all input streams from the remote journals
using {{nextTxId}} as the starting transaction ID. The result will be non-empty because
of the unfinalized segment from 1).
4. An {{IllegalStateException}} is thrown in {{openForWrite}} because of 3).
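
To make the ordering concrete, here is a toy walk-through of steps 1)-4) with made-up transaction IDs. This is illustration only, not actual Hadoop code; the txids (91/100/105) and the class name are invented for the example:
{code:java}
// Toy walk-through of the failure, with made-up txids (not Hadoop code).
public class InProgressTailingCrashExample {
  public static void main(String[] args) {
    // Step 1: the old active persisted edits up to txid 105 on the JNs but
    // died before the dummy batch, so committedTxnId is stuck at 100 and the
    // in-progress segment starting at txid 91 was never finalized.
    long segmentStartTxId = 91;
    long lastWrittenTxId = 105;
    long committedTxnId = 100;

    // Step 2: with onlyDurableTxns set, stream selection during the catch-up
    // clamps endTxId down to committedTxnId.
    long endTxId = Math.min(lastWrittenTxId, committedTxnId);     // 100

    // The warn branch in the quoted code does not fire here, since
    // endTxId (100) is still >= the segment start (91).
    assert endTxId >= segmentStartTxId;

    // Step 3: the SBN replays only up to endTxId and tries to open its new
    // segment at endTxId + 1.
    long nextTxId = endTxId + 1;                                  // 101

    // Step 4: the JNs still expose the in-progress segment [91, 105], so a
    // readable stream exists at txid 101 and openForWrite throws
    // "Cannot start writing at txid 101 when there is a stream available for read".
    boolean readableStreamAtNextTxId = nextTxId <= lastWrittenTxId;
    System.out.println("stream available for read at " + nextTxId + ": "
        + readableStreamAtNextTxId);                              // true -> SBN aborts
  }
}
{code}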

[~xkrogen]: I could be wrong, but I think step 1) could still happen even if we get rid
of the dummy batch, unless we constantly make sure that the {{committedTxnId}} is up-to-date
with the {{endTxId}}. One potential fix would be to *not* tail the in-progress log when the
SBN catches up to the latest edits during failover (rough sketch below). This seems benign to me.
The change is simple and I've tested it in our dev environment.
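
Roughly, what I have in mind would look something like the sketch below, assuming the change lands around {{EditLogTailer#catchupDuringFailover}}. This is hypothetical and for illustration only: the {{inProgressOk}} flag on {{doTailEdits}} and the way it is threaded through to stream selection are assumptions, not code from trunk.
{code:java}
// Hypothetical sketch of the proposed fix (illustration only; the
// inProgressOk parameter and its plumbing are assumed, not actual code).
public void catchupDuringFailover() throws IOException {
  // During the one-off catch-up in failover, read finalized segments only.
  // The SBN then opens its new segment right after the last finalized
  // transaction, so the selectInputStreams() check in FSEditLog#openForWrite()
  // finds no readable stream at or beyond that txid.
  doTailEdits(/* inProgressOk = */ false);   // hypothetical parameter
}
{code}
Normal standby tailing would keep using in-progress segments; only this one catch-up pass during the transition would be restricted to finalized edits.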


> SBN crash when transition to ANN with in-progress edit tailing enabled
> ----------------------------------------------------------------------
>
>                 Key: HDFS-13145
>                 URL: https://issues.apache.org/jira/browse/HDFS-13145
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ha, namenode
>            Reporter: Chao Sun
>            Assignee: Chao Sun
>            Priority: Major
>
> With in-progress edit log tailing enabled, {{QuorumOutputStream}} will send
two batches to the JNs: one normal edit batch, followed by a dummy batch to update the committed
transaction ID on the JNs.
> {code}
>       QuorumCall<AsyncLogger, Void> qcall = loggers.sendEdits(
>           segmentTxId, firstTxToFlush,
>           numReadyTxns, data);
>       loggers.waitForWriteQuorum(qcall, writeTimeoutMs, "sendEdits");
>       
>       // Since we successfully wrote this batch, let the loggers know. Any future
>       // RPCs will thus let the loggers know of the most recent transaction, even
>       // if a logger has fallen behind.
>       loggers.setCommittedTxId(firstTxToFlush + numReadyTxns - 1);
>       // If we don't have this dummy send, committed TxId might be one-batch
>       // stale on the Journal Nodes
>       if (updateCommittedTxId) {
>         QuorumCall<AsyncLogger, Void> fakeCall = loggers.sendEdits(
>             segmentTxId, firstTxToFlush,
>             0, new byte[0]);
>         loggers.waitForWriteQuorum(fakeCall, writeTimeoutMs, "sendEdits");
>       }
> {code}
> Between each batch, it will wait for the JNs to reach a quorum. However, if the ANN crashes
in between, the SBN will crash while transitioning to ANN:
> {code}
> java.lang.IllegalStateException: Cannot start writing at txid 24312595802 when there
is a stream available for read: ......
>         at org.apache.hadoop.hdfs.server.namenode.FSEditLog.openForWrite(FSEditLog.java:329)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:1196)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1839)
>         at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>         at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:64)
>         at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1707)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1622)
>         at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>         at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:851)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:794)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2490)
> 2018-02-13 00:43:20,728 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> {code}
> This is because, without the dummy batch, the {{committedTxnId}} will lag behind the {{endTxId}},
which causes the check in {{openForWrite}} to fail:
> {code}
>     List<EditLogInputStream> streams = new ArrayList<EditLogInputStream>();
>     journalSet.selectInputStreams(streams, segmentTxId, true, false);
>     if (!streams.isEmpty()) {
>       String error = String.format("Cannot start writing at txid %s " +
>         "when there is a stream available for read: %s",
>         segmentTxId, streams.get(0));
>       IOUtils.cleanupWithLogger(LOG,
>           streams.toArray(new EditLogInputStream[0]));
>       throw new IllegalStateException(error);
>     }
> {code}
> In our environment, this can be reproduced pretty consistently, and it leaves the
cluster with no running namenodes. Even though we are using a 2.8.2 backport, I believe the
same issue also exists in 3.0.x.



