openwhisk-dev mailing list archives

From Markus Thömmes <markusthoem...@apache.org>
Subject Re: Kafka and Proposal on a future architecture of OpenWhisk
Date Sun, 19 Aug 2018 11:29:54 GMT
Hi Tyson, Carlos,

FWIW I should change that to no longer say "Kafka" but "buffer" or "message
queue".

I see two use-cases for a queue here:
1. What you two are alluding to: buffering asynchronous requests because of
a different notion of "latency sensitivity" when the system is in an
overload scenario.
2. As a work-stealing type balancing layer between the ContainerRouters. If
we assume round-robin/least-connected (essentially random) scheduling
between ContainerRouters, we will get load discrepancies between them. To
smooth those out, a ContainerRouter can put work on a queue to be stolen by
a Router that actually has space for that work (for example: Router1
requests a new container and puts the work on the queue while it waits for
that container; Router2 already has a free container and executes the
action by stealing it from the queue). This does have the added complexity
of breaking the streaming communication between user and container (needed
to support essentially unbounded payloads), a nasty wrinkle that might
render this design alternative invalid! We could come up with something
smarter here, i.e. only putting a reference to the work on the queue; the
stealer then connects to the initial owner directly, which streams the
payload through to the stealer rather than persisting it anywhere (see the
sketch after this list).
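To make that reference-passing variant concrete, here is a rough Scala
sketch. All names in it (WorkRef, StealQueue, ContainerRouter's methods)
are invented for illustration and are not existing OpenWhisk code; the
actual streaming and container plumbing is stubbed out:

import java.util.concurrent.ConcurrentLinkedQueue

// Only a reference to the work travels through the queue, never the payload.
final case class WorkRef(activationId: String, ownerRouter: String, action: String)

final class StealQueue {
  private val q = new ConcurrentLinkedQueue[WorkRef]()
  def offer(ref: WorkRef): Unit = { q.offer(ref); () }
  def poll(): Option[WorkRef] = Option(q.poll())
}

final class ContainerRouter(self: String, queue: StealQueue) {
  // Overloaded path: ask for a new container and park a reference on the
  // queue while waiting for it.
  def handleOverflow(ref: WorkRef): Unit = {
    requestContainer(ref.action)
    queue.offer(ref)
  }

  // Underloaded path: steal a reference, then pull the payload directly
  // from the owning router, so nothing large is ever persisted in the queue.
  def stealOne(): Unit =
    queue.poll().foreach { ref =>
      val payload = streamFrom(ref.ownerRouter, ref.activationId)
      execute(ref.action, payload)
    }

  private def requestContainer(action: String): Unit =
    println(s"$self: requesting a container for $action")
  private def streamFrom(owner: String, id: String): Array[Byte] =
    Array.emptyByteArray // stand-in for a streamed connection to the owner
  private def execute(action: String, payload: Array[Byte]): Unit =
    println(s"$self: executing $action (${payload.length} bytes)")
}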

It is important to note that in this design, blocking invokes could
potentially gain the ability to carry unbounded entities, whereas
trigger/non-blocking invokes might need to be subject to a bound here to be
able to support eventual execution efficiently.
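In code that bound would amount to a simple admission rule; the cap below
is an invented number, not a proposed limit:

// Blocking invokes stream end-to-end and can stay unbounded; anything
// that may be buffered must fit a configured cap.
val maxBufferedPayloadBytes: Long = 1L << 20 // hypothetical 1 MiB cap

def admissible(blocking: Boolean, payloadSize: Long): Boolean =
  blocking || payloadSize <= maxBufferedPayloadBytes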

Personally, I'm much more drawn to the work-stealing case. It implies a
wholly different notion of using the queue though, and doesn't have much to
do with the way we use it today, which might be confusing. It could also
well be the case that work-stealing type algorithms are easier to back with
a proper MQ than to make work on Kafka.

It might also be important to note that those two use-cases might require
different technologies (buffering vs. a queue backend for work-stealing)
and could well be separated in the design as well. For instance, buffering
trigger fires etc. does not necessarily need to be done on the execution
layer but could instead be pushed to another layer. Having the notion of
"async" vs. "sync" in the execution layer could be beneficial for load
balancing itself though. Something worth exploring imho.
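Roughly, exposing that notion to the load balancer could look like the
following sketch; "Buffer" as a routing target and the overload signal are
assumptions of mine, not existing components:

sealed trait InvokeKind
case object Blocking extends InvokeKind // caller waits, latency sensitive
case object Async    extends InvokeKind // trigger fires, non-blocking invokes

// Hypothetical routing decision: sync work never touches the buffer,
// async work is only diverted to it under overload.
def route(kind: InvokeKind, overloaded: Boolean): String = kind match {
  case Blocking            => "ContainerRouter"
  case Async if overloaded => "Buffer"
  case Async               => "ContainerRouter"
}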

Sorry for the wall of text, I hope this clarifies things!

Cheers,
Markus

On Sat, Aug 18, 2018 at 02:36, Carlos Santana <csantana23@gmail.com> wrote:

> Triggers get responded to right away (202) with an activation id and are
> then sent to the queue to be processed async, same as async action invokes.
>
> I think we would keep the same contract as today for this type of
> activation, which is eventually processed, as distinct from blocking
> invokes (including web actions, where the HTTP client holds a connection
> waiting for the result).
>
> - Carlos Santana
> @csantanapr
>
> On Aug 17, 2018, at 6:14 PM, Tyson Norris <tnorris@adobe.com.INVALID> wrote:
> >
> > Hi -
> > Separate thread regarding the proposal: when routing activations, what
> > is considered overload and destined for Kafka?
> >
> > In general, if Kafka is not on the blocking activation path, why would
> > it be used at all, if the timeouts and processing expectations of
> > blocking and non-blocking are the same?
> >
> > One case I can imagine: triggers + non-blocking invokes, but only in
> > the case where those have different timeout characteristics. E.g. if a
> > trigger fires an action, is there any case where the activation should
> > be buffered to Kafka if it will time out the same as a blocking
> > activation?
> >
> > Sorry if I’m missing something obvious.
> >
> > Thanks
> > Tyson
> >
> >
>
