From: Jordan Zimmerman
Subject: Re: Curator Framework Close() is NOT always shutting down ConnectionStateManager threads
Date: Fri, 7 Feb 2014 17:31:12 -0500
To: user@curator.apache.org

Close is supposed to shut down everything. What version are you using? Can
you create a test case that exhibits the problem?
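Something along the following lines would exhibit it (a rough sketch, not
from this thread and untested; it assumes the pre-Apache com.netflix.curator
artifacts shown in the stack trace below, and the connect string is a
placeholder):

    import com.netflix.curator.framework.CuratorFramework;
    import com.netflix.curator.framework.CuratorFrameworkFactory;
    import com.netflix.curator.retry.RetryOneTime;

    public class CloseLeakRepro {
        public static void main(String[] args) throws Exception {
            // Placeholder connect string; point at a live ensemble.
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "host1:2181,host2:2181", new RetryOneTime(1000));
            client.start();
            client.getZookeeperClient().blockUntilConnectedOrTimedOut();
            client.close();

            Thread.sleep(1000); // give executor threads a moment to exit

            // After close() returns, no ConnectionStateManager-* thread should
            // survive; anything printed here reproduces the reported leak.
            for (Thread t : Thread.getAllStackTraces().keySet()) {
                if (t.getName().startsWith("ConnectionStateManager")) {
                    System.out.println("leaked: " + t.getName()
                            + " (" + t.getState() + ")");
                }
            }
        }
    }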
====================
Jordan Zimmerman

> On Feb 7, 2014, at 4:35 PM, Raji Muthupandian wrote:
>
> Hi Team,
> We have a 5-server ZooKeeper ensemble. If any one of the servers goes
> down, its connections are rebalanced to the remaining servers. When the
> down server comes back, it receives no connections unless new clients
> connect or a connectionLoss happens on the client side. This leaves the
> connection distribution across the servers unequal.
>
> To avoid this, we thought of having a connection balancer task which
> closes the existing connection and creates a new one at every configured
> time interval. As part of the connection close, we call
> CuratorFramework.close(). But in some instances the ConnectionStateManager
> threads are not closed; they are still in WAITING state, waiting to take
> events from the event queue:
>
> "ConnectionStateManager-0" - Thread t@272
>    java.lang.Thread.State: WAITING
>        at sun.misc.Unsafe.park(Native Method)
>        - parking to wait for <3be45251> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
>        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
>        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
>        at com.netflix.curator.framework.state.ConnectionStateManager.processEvents(ConnectionStateManager.java:170)
>        at com.netflix.curator.framework.state.ConnectionStateManager.access$000(ConnectionStateManager.java:40)
>        at com.netflix.curator.framework.state.ConnectionStateManager$1.call(ConnectionStateManager.java:104)
>        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>        at java.lang.Thread.run(Thread.java:724)
>
> This creates a lot of dangling threads over time. Is there a way to
> cleanly close the connections? And is there a better approach to handling
> connection balancing?
>
> Thanks
> Raji
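For reference, the balancer task described above might look roughly like
this (a sketch with invented names and interval; it assumes the same
com.netflix.curator artifacts):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicReference;

    import com.netflix.curator.framework.CuratorFramework;
    import com.netflix.curator.framework.CuratorFrameworkFactory;
    import com.netflix.curator.retry.ExponentialBackoffRetry;

    public class ConnectionBalancer {
        private final String connectString;
        private final AtomicReference<CuratorFramework> clientRef =
                new AtomicReference<CuratorFramework>();
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        public ConnectionBalancer(String connectString) {
            this.connectString = connectString;
            clientRef.set(newClient());
        }

        private CuratorFramework newClient() {
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    connectString, new ExponentialBackoffRetry(1000, 3));
            client.start();
            return client;
        }

        // Periodically replace the client so sessions spread back across
        // all servers in the ensemble.
        public void start(final long intervalMinutes) {
            scheduler.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    CuratorFramework old = clientRef.getAndSet(newClient());
                    // This is the close() after which the reporter saw
                    // ConnectionStateManager threads left in WAITING state.
                    old.close();
                }
            }, intervalMinutes, intervalMinutes, TimeUnit.MINUTES);
        }

        public CuratorFramework client() {
            return clientRef.get();
        }
    }

Note that swapping the client like this also ends the old ZooKeeper session,
so ephemeral nodes and watches registered through it disappear on each swap;
anything session-scoped has to be re-created against the new client.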