From: Jordan Zimmerman
Subject: Re: question about curator - retry policy
Date: Fri, 20 May 2016 21:58:16 -0500
To: user@curator.apache.org

You don’t need to maintain your own cache. Service Discovery already handles that.

-Jordan

On May 20, 2016, at 5:36 PM, Moshiko Kasirer <moshek@liveperson.com> wrote:

We are using nginx as our web tier, which delegates requests to one of the registered app nodes using consistent hashing. Since we have many web and app nodes, we have to make sure all available app nodes are known to the web tier and that at any given time they all see the same picture of the app nodes. So we built an app on top of your service discovery: when an app node is up it registers itself, and the web tier listens to that cluster and updates its view of the available app nodes. In addition, we handle situations where there is no connection to ZK by using a cache file with the latest available view until the connection is restored. For some reason, sometimes, although ZK is up and running, the Curator connection listener we rely on to know whether we should re-register isn't invoked, meaning the state stays LOST...

On May 21, 2016 at 01:23, "Jordan Zimmerman" <jordan@jordanzimmerman.com> wrote:
Retry policy is only used for individual operations. Any client-server system needs to have retries to avoid temporary network events. The entire curator-client and curator-framework modules are written to handle ZooKeeper client connection maintenance. So, there isn’t one thing I can point to.

Internally, the ServiceDiscovery code uses a PathChildrenCache instance. If all you are using is Service Discovery, there is almost no need for you to monitor the connection state. What are you trying to accomplish?

-Jordan
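[Illustrative sketch, not part of the original message: how a consumer (for example the web tier discussed in this thread) might watch the registered instances through a ServiceCache instead of monitoring the connection state itself. The connect string, base path and service name below are placeholder assumptions.]

import java.util.List;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.state.ConnectionState;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.curator.x.discovery.ServiceCache;
import org.apache.curator.x.discovery.ServiceDiscovery;
import org.apache.curator.x.discovery.ServiceDiscoveryBuilder;
import org.apache.curator.x.discovery.ServiceInstance;
import org.apache.curator.x.discovery.details.ServiceCacheListener;

public class AppNodeWatcher {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181",               // placeholder connect string
                new ExponentialBackoffRetry(1000, 3));
        client.start();

        ServiceDiscovery<Void> discovery = ServiceDiscoveryBuilder.builder(Void.class)
                .client(client)
                .basePath("/services")                       // placeholder base path
                .build();
        discovery.start();

        // The cache is driven by an internal PathChildrenCache and is kept
        // up to date for us; no manual connection-state handling is needed.
        final ServiceCache<Void> cache = discovery.serviceCacheBuilder()
                .name("app-node")                            // placeholder service name
                .build();
        cache.addListener(new ServiceCacheListener() {
            @Override
            public void cacheChanged() {
                List<ServiceInstance<Void>> instances = cache.getInstances();
                System.out.println("Available app nodes: " + instances.size());
            }

            @Override
            public void stateChanged(CuratorFramework client, ConnectionState newState) {
                // optional; usually not needed when only Service Discovery is used
            }
        });
        cache.start();

        Thread.sleep(Long.MAX_VALUE);                        // keep watching
    }
}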

On May 20, 2016, at 5:19 PM, Moshiko Kasirer <moshek@liveperson.com> wrote:

The thing is, we have many negative tests in which we stop and start the ZK quorum, and the issue I raised only happens from time to time... so it's hard to reproduce. But you just wrote that when the quorum is up the connection should be reconnected... how? Who does that, ZkClient or Curator? Is that not related to the retry policy?

On May 21, 2016 at 01:12, "Jordan Zimmerman" <jordan@jordanzimmerman.com> wrote:
If the ZK cluster’s quorum is restored, then the connection state should change to RECONNECTED. There are copious tests in Curator itself that show this. If you’re seeing that Curator does not restore a broken connection then there is a deeper bug. Can you create a test that shows the problem?

-Jordan
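[Illustrative sketch, not part of the original message: a minimal connection-state listener that logs the SUSPENDED/LOST/RECONNECTED transitions being discussed. The connect string and retry values are placeholder assumptions.]

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.state.ConnectionState;
import org.apache.curator.framework.state.ConnectionStateListener;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ConnectionStateLogger {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181",            // placeholder connect string
                new ExponentialBackoffRetry(1000, 3));   // placeholder retry policy

        client.getConnectionStateListenable().addListener(new ConnectionStateListener() {
            @Override
            public void stateChanged(CuratorFramework client, ConnectionState newState) {
                // Typical sequence when the quorum goes away and comes back:
                // SUSPENDED -> (LOST, if the session/retries are exhausted) -> RECONNECTED.
                // The exact LOST semantics differ between Curator 2.x and 3.x.
                System.out.println("Connection state is now: " + newState);
            }
        });

        client.start();
        Thread.sleep(Long.MAX_VALUE);                    // keep the process alive to observe changes
    }
}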

On May 20, 2016, at 5:07 PM, Moshiko Kasirer <moshek@liveperson.com> wrote:

I mean that while the ZK cluster is up, the Curator connection state stays LOST,
which in our case means the app node on which it happens doesn't register itself as available... I just don't seem to understand when Curator gives up on trying to connect to ZK and when it doesn't give up.
Thanks for the help!

On May 21, 2016 at 00:58, "Jordan Zimmerman" <jordan@jordanzimmerman.com> wrote:
You must have a retry policy so that you don’t overwhelm your network and ZooKeeper cluster. The example code shows how to create a reasonable one.
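[Illustrative sketch, not part of the original message: one reasonable way to build a client, along the lines of the Curator examples. The connect string and timeout values are placeholder assumptions.]

import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ClientFactory {
    public static CuratorFramework newStartedClient() {
        // Exponential backoff: first retry after ~1s, at most 3 retries per operation.
        RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);

        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181",   // placeholder connect string
                60000,                          // placeholder session timeout (ms)
                15000,                          // placeholder connection timeout (ms)
                retryPolicy);
        client.start();
        return client;
    }

    public static void main(String[] args) {
        CuratorFramework client = newStartedClient();
        System.out.println("Curator state: " + client.getState());
        client.close();
    }
}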

> sometimes although zk cluster is up the curator service discovery connection isn't

Service Discovery’s internal instances might be waiting based on the retry policy. But, what do you mean by “curator service discovery connection isn’t”? There isn’t such a thing as a service discovery connection.

-Jordan

On May 20, 2016, at 4:53 PM, Moshiko Kasirer <moshek@liveperson.com> wrote:

We are using your service discovery. So you are saying I should not care about the retry policy...? So the only thing left to explain is how come sometimes, although the ZK cluster is up, the Curator service discovery connection isn't...

On May 21, 2016 at 00:43, "Jordan Zimmerman" <jordan@jordanzimmerman.com> wrote:
If you are using Curator’s Service Discovery code, it will be continuously re-trying the connections. This is not because of the retry policy; it’s because the Service Discovery code manages connection interruptions internally.

-Jordan
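[Illustrative sketch, not part of the original message: registering an app node through ServiceDiscoveryBuilder. The instance passed via thisInstance() is registered when start() is called and, as described above, managed across connection interruptions by the Service Discovery code itself. Service name, address, port and base path are placeholder assumptions.]

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.curator.x.discovery.ServiceDiscovery;
import org.apache.curator.x.discovery.ServiceDiscoveryBuilder;
import org.apache.curator.x.discovery.ServiceInstance;

public class AppNodeRegistration {
    public static void main(String[] args) throws Exception {
        final CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181",                // placeholder connect string
                new ExponentialBackoffRetry(1000, 3));
        client.start();

        ServiceInstance<Void> thisNode = ServiceInstance.<Void>builder()
                .name("app-node")                             // placeholder service name
                .address("10.0.0.1")                          // placeholder address
                .port(8080)                                   // placeholder port
                .build();

        final ServiceDiscovery<Void> discovery = ServiceDiscoveryBuilder.builder(Void.class)
                .client(client)
                .basePath("/services")                        // placeholder base path
                .thisInstance(thisNode)                       // registered on start(); re-registered
                                                              // internally after reconnects
                .build();
        discovery.start();

        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                try {
                    discovery.close();                        // unregisters this node
                    client.close();
                } catch (Exception ignore) {
                    // best effort on shutdown
                }
            }
        });

        Thread.sleep(Long.MAX_VALUE);                         // keep the node registered
    }
}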

On May 20, 2016, at 4:40 PM, Moshiko Kasirer <moshek@liveperson.com> wrote:

Thanks for the reply, I will send those logs ASAP.
It's difficult to understand the connection mechanism of ZK...
We are using Curator 2.10 for our service discovery, so we have to make sure that when ZK is alive we connect and announce that our server is up. We do that by listening to the Curator connection listener, which I think also has to do with the retry policy... But what I can't understand is why we sometimes see in the log that Curator gave up (LOST), yet a second later the Curator connection is restored. How? Is it because the ZK session heartbeat restored the connection? Does that cause Curator to change its connection state? And on the other hand, we sometimes get to a point where ZK is up but the Curator connection stays LOST...
That is why I thought of using the new always-retry policy you added. Do you think it can help? That way I hope there will be no case where ZK is up but the Curator status is LOST... since once it retries it will reconnect to ZK... Is that correct?

On May 21, 2016 at 00:10, "Jordan Zimmerman" <jordan@jordanzimmerman.com> wrote:
Curator’s retry policies are used within each CuratorFramework operation. For example, when you call client.setData().forPath(p, b) the retry policy will be invoked if there is a retry-able exception during the operation. In addition to the retryPolicy, there are connection timeouts. The behavior of how this is handled changed between Curator 2.x and Curator 3.x. In Curator 2.x, for every iteration of the retry, the operation will wait until connection timeout when there’s no connection. In Curator 3.x, the connection timeout wait only occurs once (if the default ConnectionHandlingPolicy is used).
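[Illustrative sketch, not part of the original message: a single operation of the kind described above. The retry policy configured on the client is applied inside the call if a retryable error occurs; the path, payload and connect string are placeholder assumptions.]

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.RetryNTimes;

public class SetDataExample {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181",    // placeholder connect string
                new RetryNTimes(1, 1000));       // "max retry 1", as mentioned in the question
        client.start();

        String path = "/example/config";         // placeholder path
        if (client.checkExists().forPath(path) == null) {
            client.create().creatingParentsIfNeeded().forPath(path);
        }

        // The retry policy is consulted inside this call, per operation,
        // if ZooKeeper reports a retryable error (e.g. connection loss).
        client.setData().forPath(path, "payload".getBytes());

        client.close();
    }
}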

In any event, ZooKeeper itself tries to maintain the connection. Also, Curator will re-create the internally managed connection depending on various network interruptions, etc. I’d need to see the logs to give you more input.

-Jordan

On May 19, 2016, at 10:12 AM, Moshiko Kasirer <moshek@liveperson.com> wrote:

First, I would like to thank you for Curator, we are using it as part of our service discovery solution and it helps a lot!!

I have a question I hope you will be able to help me with.

It's regarding the Curator retry policy: it seems to me we don't really understand when this policy is invoked. I see in our logs that, although I configured it with max retry 1, there are actually many ZK reconnection attempts (and many "curator gave up" messages, but later I see a reconnected status...). Is it possible that the policy is only relevant to manually invoked operations against the ZK cluster done via Curator? And that the reconnections I see in the logs are caused by the fact that ZK was available during start-up, so sessions were created, and then when ZK went down the ZK clients (not Curator) kept sending heartbeats as part of the ZK architecture? That is the part I am failing to understand and I hope you can help me with.

You have recently added a RetryAlways policy and I wanted to know if it is safe to use. The thing is, we always want to retry connecting to ZK when it is available, but that is something the ZK client does as long as it has open sessions, right? I am not sure it has to do with the retry policy...

Thanks,
Moshiko

--
Moshiko Kasirer
Software Engineer
T: +972-74-700-4357
We Create Meaningful Connections
