From: "Jan-Philip Gehrcke (JIRA)"
To: dev@zookeeper.apache.org
Date: Tue, 23 Jul 2019 16:49:00 +0000 (UTC)
Subject: [jira] [Created] (ZOOKEEPER-3466) ZK cluster converges, but does not properly handle client connections (new in 3.5.5)

Jan-Philip Gehrcke created ZOOKEEPER-3466:
---------------------------------------------

             Summary: ZK cluster converges, but does not properly handle client connections (new in 3.5.5)
                 Key: ZOOKEEPER-3466
                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3466
             Project: ZooKeeper
          Issue Type: Bug
    Affects Versions: 3.5.5
         Environment: Linux
            Reporter: Jan-Philip Gehrcke


Hey,

we are exploring a switch from ZooKeeper 3.4.14 to ZooKeeper 3.5.5 in [DC/OS|https://github.com/dcos/dcos].

DC/OS coordinates ZooKeeper via Exhibitor. We are not changing anything w.r.t. Exhibitor for now and are hoping that we can use ZooKeeper 3.5.5 as a drop-in replacement for 3.4.14. This seems to work fine when Exhibitor uses a so-called static ensemble, where the individual ZooKeeper instances are known a priori.

When Exhibitor instead discovers the individual ZooKeeper instances (the "dynamic" back-end), we observe what appears to be a regression: ZooKeeper 3.5.5 often (but not always) gets into the following bad state:

# three ZooKeeper instances find each other, and leader election takes place (*expected*)
# leader election succeeds: two followers, one leader (*expected*)
# all three ZK instances respond IMOK to RUOK (*expected*)
# all three ZK instances respond to SRVR (one says "Mode: leader", the other two say "Mode: follower") (*expected*)
# all three ZK instances respond to MNTR and show plausible output (*expected*)
# *{color:#FF0000}Unexpected:{color}* any ZooKeeper client trying to connect to any of the three nodes observes a "connection timeout", whereas notably this is *not* a TCP connect() timeout.
The TCP connect() succeeds, but ZK then does not seem to send the expected bytes over the TCP connection, and the ZK clients wait for them via recv() until they hit a timeout condition. Examples from two different clients (see also the sketch after this list):
## In Kazoo we specifically hit "Connection time-out: socket time-out during read", generated here: [https://github.com/python-zk/kazoo/blob/88b657a0977161f3815657878ba48f82a97a3846/kazoo/protocol/connection.py#L249]
## In zkCli we see
Client session timed out, have not heard from server in 15003ms for session id 0x0, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:main-SendThread(localhost:2181))
# This state is stable; it lasts forever (well, at least for multiple hours, and we did not test longer than that).
# In our system the ZooKeeper clients are crash-looping, i.e. they keep retrying. What I have observed is that while they retry, the ZK ensemble accumulates outstanding requests, here shown from MNTR output:
zk_packets_received    92008
zk_packets_sent    127
zk_num_alive_connections    18
zk_outstanding_requests    1880
# The leader emits log lines confirming session timeouts, for example:
{code:java}
[myid:3] INFO [SessionTracker:ZooKeeperServer@398] - Expiring session 0x2000642b18f0020, timeout of 10000ms exceeded
[myid:3] INFO [SessionTracker:QuorumZooKeeperServer@157] - Submitting global closeSession request for session 0x2000642b18f0020
{code}
# In this state, restarting either of the two ZK followers results in the same state (clients don't get data from ZK upon connect).
# In this state, restarting the ZK leader, and therefore triggering a leader re-election, almost immediately results in all clients being able to connect to all ZK instances successfully.
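For completeness, a minimal, self-contained sketch of the two kinds of client-side checks described above could look like the following. It uses a raw TCP socket for the four-letter words and the stock org.apache.zookeeper.ZooKeeper client for the session handshake; the host, port, timeouts and class name are placeholders, and it assumes the four-letter commands used above are whitelisted on the servers (as they evidently are in our setup):

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class EnsembleCheck {

    // Sends a four-letter word ("ruok", "srvr", "mntr", ...) over a plain TCP
    // connection and returns the server's reply. This is what steps 3-5 above
    // rely on; in the bad state these checks still succeed.
    static String fourLetterWord(String host, int port, String cmd) throws Exception {
        try (Socket sock = new Socket(host, port)) {
            OutputStream out = sock.getOutputStream();
            out.write(cmd.getBytes(StandardCharsets.US_ASCII));
            out.flush();
            // The server writes its reply and then closes the connection,
            // so simply read until EOF.
            InputStream in = sock.getInputStream();
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            byte[] chunk = new byte[4096];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buf.write(chunk, 0, n);
            }
            return new String(buf.toByteArray(), StandardCharsets.US_ASCII);
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder address: substitute one of the three ensemble members.
        String host = "10.0.0.1";
        int port = 2181;

        System.out.println(fourLetterWord(host, port, "ruok"));  // expect "imok"
        System.out.println(fourLetterWord(host, port, "srvr"));  // expect "Mode: leader" or "Mode: follower"

        // Attempt a real client session with the stock Java client. In the
        // bad state, TCP connect() succeeds but the SyncConnected event never
        // arrives, so the latch below times out.
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper(host + ":" + port, 10000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        if (connected.await(15, TimeUnit.SECONDS)) {
            System.out.println("session established: 0x" + Long.toHexString(zk.getSessionId()));
        } else {
            System.out.println("no SyncConnected within 15s; client state: " + zk.getState());
        }
        zk.close();
    }
}
{code}

If the description above holds, running this against each of the three ensemble members while the cluster is in the bad state should print the RUOK/SRVR replies but end up in the "no SyncConnected" branch, whereas after restarting the leader the same run should complete the session handshake.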