hadoop-hdfs-issues mailing list archives

From "Liang Xie (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6617) Flake TestDFSZKFailoverController.testManualFailoverWithDFSHAAdmin due to a long edit log sync op
Date Thu, 03 Jul 2014 10:08:24 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051286#comment-14051286

Liang Xie commented on HDFS-6617:

[~cnauroth], the above suggestion is definitely better. I made a patch v2 and confirmed
the setting took effect by grepping "FSEditLog.java:printStatistics" in the log files with and
without the patch, so I am pretty sure it will fix the test failure caused by the slow edit log
sync operation :)  Please help to review, thank you!
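The grep check described above can be sketched like this; the log file path and the log line
written here are illustrative placeholders, not the actual test output:

```shell
# Write a sample line in the FSEditLog statistics format to a scratch log
# (placeholder path /tmp/namenode-test.log, placeholder contents).
printf '%s\n' \
  '2014-07-01 08:13:05,243 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(675)) - Number of transactions: 2' \
  > /tmp/namenode-test.log

# Count how many times the statistics line appears; with the patch applied,
# the expectation is that this marker shows up in the test's NameNode log.
grep -c 'FSEditLog.java:printStatistics' /tmp/namenode-test.log
# → 1
```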

> Flake TestDFSZKFailoverController.testManualFailoverWithDFSHAAdmin due to a long edit
log sync op
> -------------------------------------------------------------------------------------------------
>                 Key: HDFS-6617
>                 URL: https://issues.apache.org/jira/browse/HDFS-6617
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: auto-failover, test
>    Affects Versions: 3.0.0
>            Reporter: Liang Xie
>            Assignee: Liang Xie
>            Priority: Minor
>         Attachments: HDFS-6617-v2.txt, HDFS-6617.txt
> Just hit a false-alarm test failure while working on HDFS-6614; see https://builds.apache.org/job/PreCommit-HDFS-Build/7259//testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestDFSZKFailoverController/testManualFailoverWithDFSHAAdmin/
> A look at the log shows the failure came from a timeout in
> ZKFailoverController.doCedeActive():
> localTarget.getProxy(conf, timeout).transitionToStandby(createReqInfo());
> While stopping the active services, see FSNamesystem.stopActiveServices():
>   void stopActiveServices() {
>     LOG.info("Stopping services started for active state");
>     ....
> this correlates with the log:
> "2014-07-01 08:12:50,615 INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1167))
- Stopping services started for active state"
> stopActiveServices then calls editLog.close(), which goes to endCurrentLogSegment();
see the log:
> 2014-07-01 08:12:50,616 INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1216))
- Ending log segment 1
> but this operation did not finish within 5 seconds, which triggered the timeout:
> 2014-07-01 08:12:55,624 WARN  ha.ZKFailoverController (ZKFailoverController.java:doCedeActive(577))
- Unable to transition local node to standby: Call From asf001.sp2.ygridcore.net/
to localhost:10021 failed on socket timeout exception: java.net.SocketTimeoutException: 5000
millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected
local=/ remote=localhost/]; For more details see:  http://wiki.apache.org/hadoop/SocketTimeout
> the logEdit/logSync finally completed, followed by printStatistics(true):
> 2014-07-01 08:13:05,243 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(675))
- Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched
in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 14667 74 105 
> So obviously this long sync caused the timeout; maybe the QA box was very slow
at that moment. One possible fix here is setting the default fence timeout to a bigger value.
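
One way to express that fix, assuming the 5-second limit is governed by the failover
controller's graceful-fence RPC timeout (the property name and default below are assumptions
based on core-default.xml and should be verified against the tree being patched):

```xml
<!-- Sketch only: raises the RPC timeout used when ZKFailoverController
     asks the local NameNode to transition to standby (doCedeActive).
     The assumed default is 5000 ms, matching the SocketTimeoutException
     in the test log above. -->
<property>
  <name>ha.failover-controller.graceful-fence.rpc-timeout.ms</name>
  <value>30000</value>
</property>
```

In the test itself, the equivalent would be setting this key on the MiniDFSCluster
Configuration before starting the ZKFC, so a slow build machine gets more headroom for the
edit log sync.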

This message was sent by Atlassian JIRA
