From: "Michael Han (JIRA)"
To: dev@zookeeper.apache.org
Date: Fri, 11 Jan 2019 08:35:00 +0000 (UTC)
Subject: [jira] [Commented] (ZOOKEEPER-3240) Close socket on Learner shutdown to avoid dangling socket

    [ https://issues.apache.org/jira/browse/ZOOKEEPER-3240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16740161#comment-16740161 ]

Michael Han commented on ZOOKEEPER-3240:
----------------------------------------

[~nixon] Good catch, the fix looks reasonable.

I've seen a similar issue in my production environment. The fix I made was on the Leader side, where I tracked the LearnerHandler threads associated with server ids and made sure each server id has only a single LearnerHandler thread at a time. This also works in cases where the learners never get a chance to close their sockets, or where they did but for some reason the TCP reset never made it to the leader. In any case, it's good to fix the resource leak on the learner side as well.
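Roughly, the leader-side bookkeeping I'm describing looks like the sketch below. This is an illustrative, minimal version with hypothetical names (LearnerHandlerRegistry, register, unregister); the actual code in my deployment is structured differently.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch: keep at most one live handler thread per learner server id (sid).
class LearnerHandlerRegistry {
    private final Map<Long, Thread> handlersBySid = new ConcurrentHashMap<>();

    // Called when a learner (re)connects. If a stale handler for the same
    // sid is still alive, interrupt it so it unwinds and closes its socket
    // before the new handler takes over.
    void register(long sid, Thread newHandler) {
        Thread stale = handlersBySid.put(sid, newHandler);
        if (stale != null && stale.isAlive()) {
            stale.interrupt();
        }
    }

    // Called from a handler's own shutdown path; removes the mapping only
    // if this handler is still the one registered for the sid.
    void unregister(long sid, Thread handler) {
        handlersBySid.remove(sid, handler);
    }
}
{code}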
I also wonder how we get into this state on the Leader side in the first place. On the leader, we do set a socket read timeout via setSoTimeout for learner handler threads (after the socket is created via serverSocket.accept), and each learner handler constantly polls / tries to read from that socket afterwards. So if a learner dies but leaves a valid socket open, I was expecting that on the leader side the LearnerHandler thread reading from the dead learner's socket would eventually time out, throw a SocketTimeoutException, and cause the LearnerHandler thread on the leader to kill itself. That does not seem to match what I observed, though. Do you have any insights on this?

> Close socket on Learner shutdown to avoid dangling socket
> ---------------------------------------------------------
>
>                 Key: ZOOKEEPER-3240
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3240
>             Project: ZooKeeper
>          Issue Type: Improvement
>          Components: server
>    Affects Versions: 3.6.0
>            Reporter: Brian Nixon
>            Priority: Minor
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> There was a Learner that had two connections to the Leader after that Learner hit an unexpected exception while flushing txns to disk, which shuts down the previous follower instance and restarts a new one.
>
> {quote}2018-10-26 02:31:35,568 ERROR [SyncThread:3:ZooKeeperCriticalThread@48] - Severe unrecoverable error, from thread : SyncThread:3
> java.io.IOException: Input/output error
>         at java.base/sun.nio.ch.FileDispatcherImpl.force0(Native Method)
>         at java.base/sun.nio.ch.FileDispatcherImpl.force(FileDispatcherImpl.java:72)
>         at java.base/sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:395)
>         at org.apache.zookeeper.server.persistence.FileTxnLog.commit(FileTxnLog.java:457)
>         at org.apache.zookeeper.server.persistence.FileTxnSnapLog.commit(FileTxnSnapLog.java:548)
>         at org.apache.zookeeper.server.ZKDatabase.commit(ZKDatabase.java:769)
>         at org.apache.zookeeper.server.SyncRequestProcessor.flush(SyncRequestProcessor.java:246)
>         at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:172)
> 2018-10-26 02:31:35,568 INFO  [SyncThread:3:ZooKeeperServerListenerImpl@42] - Thread SyncThread:3 exits, error code 1
> 2018-10-26 02:31:35,568 INFO [SyncThread:3:SyncRequestProcessor@234] - SyncRequestProcessor exited!{quote}
>
> The restart is supposed to close the previous socket, but that doesn't seem to be done anywhere in the code. This leaves the old socket open with no one reading from it, which eventually fills the send queue and blocks the sender.
>
> Since the LearnerHandler didn't shut down gracefully, the learner queue size keeps growing, the JVM heap size on the leader keeps growing and adds pressure on the GC, causing high GC times and latency in the quorum.
>
> The simple fix is to gracefully shut down the socket.
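For illustration, the graceful shutdown the issue asks for amounts to something like the sketch below. The class, field, and method names here are hypothetical stand-ins, not the actual patch; see the pull request attached to the issue for the real change.

{code:java}
import java.io.IOException;
import java.net.Socket;

// Minimal sketch: close the leader connection during learner shutdown so the
// leader's LearnerHandler sees EOF instead of a dangling half-open socket.
class LearnerShutdownSketch {
    private volatile Socket sock; // connection to the leader

    void shutdown() {
        Socket s = sock;
        sock = null;
        if (s != null && !s.isClosed()) {
            try {
                s.close();
            } catch (IOException e) {
                // best effort: we are already shutting down
            }
        }
    }
}
{code}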