zookeeper-user mailing list archives

From Yue Shen <shyue2...@gmail.com>
Subject Re: How to scale ZooKeeper to support 10K concurrent connections?
Date Fri, 27 Sep 2019 19:07:11 GMT
Thank you, Jorn.

We don't use Solr. We inherited this architecture from another team, and we
don't have the time to redesign the system in the next two months.

As you said, if I were designing it from scratch, I would definitely put a
queue in front of the Lambda service. Our new design, which kicked off a
couple of weeks ago, does exactly that with Kafka in front. However, we need
to scale the current system through the coming holiday season before we can
roll out the new one.
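In the meantime, the pattern we are moving toward is easy to sketch in
miniature (a toy analogue only: `queue.Queue` stands in for Kafka, and the
worker and task counts are placeholders, not our real numbers):

```python
import queue
import threading

def run_pipeline(num_tasks=1000, num_workers=10):
    """Toy analogue of the queue-in-front design: many producers
    (standing in for Lambda invocations) enqueue work, while a small
    fixed pool of workers drains the queue. Only the workers would
    ever talk to ZooKeeper, capping its connections at num_workers."""
    q = queue.Queue()
    processed = []
    lock = threading.Lock()

    def worker():
        while True:
            task = q.get()
            if task is None:            # poison pill: shut down
                q.task_done()
                return
            with lock:
                processed.append(task)  # stand-in for "dispatch to worker"
            q.task_done()

    workers = [threading.Thread(target=worker) for _ in range(num_workers)]
    for w in workers:
        w.start()

    for i in range(num_tasks):          # 10k Lambda calls become cheap enqueues
        q.put(i)
    for _ in range(num_workers):        # one poison pill per worker
        q.put(None)

    q.join()
    for w in workers:
        w.join()
    return len(processed)
```

With this shape, the backend only ever sees `num_workers` connections,
regardless of how many Lambda invocations arrive.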

At this point, we want to tune ZooKeeper so it can handle 10K concurrent
calls. Any suggestions?
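For context, these are the knobs we have found so far (the values below are
guesses we have not validated, not a recommendation):

```
# zoo.cfg (sketch only)
# Per-client-IP connection cap; the default of 60 is far too low when
# thousands of callers share a few source IPs. 0 means unlimited.
maxClientCnxns=0
# Let clients negotiate longer sessions so slow calls don't expire.
maxSessionTimeout=60000
```

We are also checking the file-descriptor limit (`ulimit -n`) on the ensemble
hosts, since a hard plateau in connection count can come from the OS rather
than from ZooKeeper itself.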

Thank you,

On Fri, Sep 27, 2019 at 10:39 AM Jörn Franke <jornfranke@gmail.com> wrote:

> Put the Solr requests on an SQS queue from your 10k instances and have 10
> or so workers consuming the queue to write into Solr. Having 10k
> connections just because Lambda creates that many instances does not make
> sense for a non-database service.
> > Am 27.09.2019 um 19:01 schrieb Yue Shen <shyue2010@gmail.com>:
> >
> > Dear ZooKeeper users,
> >
> > I have a special use case in which I use the AWS Lambda service.
> >
> > Inside the Lambda function logic, it queries ZooKeeper to find the worker
> > assigned to the data; if one exists, it connects to that worker's
> > endpoint and sends the data. If no worker is assigned, it posts a new
> > assignment and waits for a worker to pick it up. A coordinator watches
> > for new assignments and assigns tasks.
> >
> > My problem comes from the AWS Lambda service, which can launch tens of
> > thousands of concurrent invocations. When this happens, many calls time
> > out, and the active connections to ZooKeeper plateau around 6,500.
> >
> > BTW, I run ZooKeeper as a 3-node ensemble in quorum mode.
> >
> > How can I scale ZooKeeper to support more concurrent connections?
> >
> > Thank you,
> > Yue
