zookeeper-user mailing list archives

From Camille Fournier <cami...@apache.org>
Subject Re: High availability backend services via zookeeper or TCP load balancer
Date Tue, 26 Feb 2013 18:35:10 GMT
You can definitely use ZK for this, as Jordan said. However, I would really
question whether writing client-side code for this is better than using
something actually designed for load balancing (like haproxy). It doesn't
sound like you are creating long-lived connections between these clients and
services; you just want to send a request to an IP address that corresponds
to the LB for that request. Your client-side code is probably going to be
buggier, and the setup/maintenance more complex, than if you use a simple
load balancer. If you're already using ZK for a lot of other things and it is
really baked into all your clients, maybe this is the easiest thing to do,
but I wouldn't use ZK just for this purpose.
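For the haproxy route, a minimal TCP-mode config would look something like
the sketch below. All names, addresses, and ports here are illustrative
assumptions, not taken from the thread:

```text
# Minimal haproxy TCP load-balancing sketch (illustrative values).
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend internal_services
    bind *:9000
    default_backend my_servers

backend my_servers
    balance roundrobin
    # "check" enables periodic health checks, so failed backends
    # are taken out of rotation automatically
    server srv1 10.0.0.1:9000 check
    server srv2 10.0.0.2:9000 check
```

Paired with keepalived for a floating VIP, this replaces both the load
balancing and the failure-eviction logic you would otherwise hand-roll.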


On Tue, Feb 26, 2013 at 1:27 PM, Jordan Zimmerman <
jordan@jordanzimmerman.com> wrote:

> Service Discovery is a good use-case for ZooKeeper. FYI - Curator has an
> implementation of this already:
>         https://github.com/Netflix/curator/wiki/Service-Discovery
> -Jordan
> On Feb 26, 2013, at 9:36 AM, howard chen <howachen@gmail.com> wrote:
> > Hi, I am new to ZK, so please forgive me if my question below is stupid :)
> >
> > We have custom-written servers (not public facing, only called by our
> > internal system) which are distributed (TCP based, share nothing) and
> > currently run in AWS. With the help of ELB's TCP load balancing, the
> > setup is reasonably fault-tolerant and we are happy with that.
> >
> > Now, we need to move off AWS to save cost as our traffic grows.
> >
> > The problem is, now we need to maintain our own load balancers and make
> > them fault-tolerant (unlike ELB, where this is built in); the expected
> > technologies would be haproxy and keepalived.
> >
> > While thinking about this setup, I wondered: why not use ZK instead? Why
> > not maintain the list of currently available servers in ZK? My initial
> > algorithm for the internal clients would be:
> >
> > 1. Get the latest server list from ZK
> > 2. Hash the server list and pick one of the backends (the load balancing
> > part)
> > 3. Call it
> > 4. If the call fails, update ZK and increment the backend's error count
> > 5. If the error count reaches a threshold, remove the backend from the
> > server list
> > 6. So the other clients will not see the failing backend
> > 7. Flush the error count periodically so the backend has a chance to
> > become active again
> >
> > Is my algorithm above valid? Any caveats when using ZK for this?
> >
> > Looking for your comment, thanks.
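The seven-step client algorithm quoted above can be sketched in Python. The
ZooKeeper reads and writes (steps 1, 4, and 6) are stubbed out as an
in-memory dict here; the class and method names are illustrative, not a real
ZK client API:

```python
import hashlib

class ClientBalancer:
    """Client-side balancer sketch for the 7-step algorithm above.
    In a real deployment the server list and error counts would live in
    ZooKeeper znodes watched by every client; here they are in-memory."""

    def __init__(self, servers, error_threshold=3):
        # Step 1: latest server list (fetched from ZK in practice).
        self.servers = list(servers)
        self.error_threshold = error_threshold
        # Error counts per backend (ZK writes in practice).
        self.error_counts = {s: 0 for s in self.servers}

    def healthy(self):
        # Step 6: clients skip backends at or over the error threshold.
        return [s for s in self.servers
                if self.error_counts[s] < self.error_threshold]

    def pick(self, request_key):
        # Step 2: hash the request key onto the healthy list.
        candidates = self.healthy()
        if not candidates:
            raise RuntimeError("no healthy backends")
        idx = int(hashlib.sha1(request_key.encode()).hexdigest(), 16)
        return candidates[idx % len(candidates)]

    def report_failure(self, server):
        # Steps 4-5: increment the error count; once it reaches the
        # threshold, healthy() stops returning this backend.
        self.error_counts[server] += 1

    def flush_errors(self):
        # Step 7: reset counts so evicted backends get another chance.
        for s in self.error_counts:
            self.error_counts[s] = 0
```

One caveat this sketch makes visible: between a backend failing and the
error count reaching the threshold, every client will keep hashing requests
onto the dead backend, so per-call timeouts and retries are still needed.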
