openwhisk-dev mailing list archives

From "Markus Thoemmes" <markus.thoem...@de.ibm.com>
Subject Re: Proposal on a future architecture of OpenWhisk
Date Wed, 18 Jul 2018 12:41:24 GMT
Hi Martin,

thanks for the great questions :)

>thinking about scalability and the edge case. When there are not enough
>containers and new controllers are being created, and all of them
>redirect traffic to the controllers with containers, doesn't it mean
>overloading the available containers a lot? I'm curious how we
>throttle the traffic in this case.

True, the first few requests will overload the Controller that owns the very first container.
That one will immediately request new containers, which the ContainerManager will then distribute
to all existing Controllers. An interesting wrinkle here is that you'd want the overloading
requests to be completed by the Controllers that relayed them to the "single-owning-Controller".
What we could do here is roughly the following (a small sketch follows the steps):

1. Controller0 owns ContainerA1.
2. Controller1 relays requests for action A to Controller0.
3. Controller0 gets more requests than it can handle, so it requests additional containers.
   Requests relayed by Controller1 are answered with a predefined message, for example
   "HTTP 503 overloaded" plus a specific header such as "X-Return-To-Sender-By: Controller0".
4. Controller1 reads this as "okay, I'll wait for containers to appear", which will eventually
   happen (because Controller0 has already requested them), so it can route and complete those
   requests on its own.
5. Controller1 no longer relays requests to Controller0 but requests containers itself
   (acknowledging that Controller0 is already overloaded).
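
To make that handshake a bit more concrete, here is a minimal sketch of the "return to sender"
logic in plain Scala. It is not OpenWhisk code; the type and function names, and the way the
503-plus-header response is modelled as a case class, are all illustrative assumptions.

// Minimal sketch (not OpenWhisk code) of the "return to sender" handshake.
// Names like Controller0/Controller1, RelayResult and the header value are
// illustrative assumptions only.
object ReturnToSenderSketch {

  sealed trait RelayResult
  // The owning Controller executed the relayed request itself.
  final case class Completed(body: String) extends RelayResult
  // The owning Controller is overloaded; this stands in for
  // "HTTP 503 overloaded" + "X-Return-To-Sender-By: <controller>".
  final case class ReturnToSender(overloadedController: String) extends RelayResult

  // What the owning Controller (Controller0) does with a relayed request.
  def handleRelayedRequest(freeContainers: Int, self: String): RelayResult =
    if (freeContainers > 0) Completed(s"executed on $self")
    else ReturnToSender(self) // more containers have already been requested

  // How the relaying Controller (Controller1) reacts to the answer.
  def onRelayAnswer(answer: RelayResult): String = answer match {
    case Completed(body) =>
      body
    case ReturnToSender(owner) =>
      // Stop relaying to the overloaded owner, wait for the containers it
      // already requested (or request our own) and complete the work locally.
      s"$owner is overloaded; keep the request and finish it once a container arrives"
  }

  def main(args: Array[String]): Unit = {
    println(onRelayAnswer(handleRelayedRequest(freeContainers = 0, self = "Controller0")))
    println(onRelayAnswer(handleRelayedRequest(freeContainers = 1, self = "Controller0")))
  }
}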

>
>I guess the other approach would be to block creating new controllers
>when there are no containers available as long as we don't want to 
>overload the existing containers. And keep the overflowing workload
>in Kafka as well.

Right, the second possibility is to use a pub/sub queue (not necessarily Kafka) between Controllers.
Controller0 subscribes to a topic for action A because it owns a container for it. Controller1
doesn't own a container (yet) and publishes a message to topic A as overflow. The wrinkle
in this case is that Controller0 can't complete the request itself but needs to send the result
back to Controller1, which holds the open HTTP connection to the client.
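
To illustrate that wrinkle, here is a rough, self-contained sketch that uses an in-memory
stand-in for the pub/sub queue (it is not Kafka and not OpenWhisk code; topic and field names
are made up). The important part is the replyTo field: the result has to travel back to the
Controller that still holds the client's HTTP connection.

// Toy sketch of the overflow-queue idea; the PubSub class is an in-memory
// stand-in for Kafka or any other pub/sub system, all names are illustrative.
import scala.collection.mutable

object OverflowQueueSketch {

  // The overflow message carries the id of the Controller that holds the
  // client's open HTTP connection, so the result can be routed back to it.
  final case class OverflowMessage(action: String, requestId: String, replyTo: String)

  final class PubSub {
    private val subscribers = mutable.Map.empty[String, OverflowMessage => Unit]
    def subscribe(topic: String)(handler: OverflowMessage => Unit): Unit =
      subscribers(topic) = handler
    def publish(topic: String, msg: OverflowMessage): Unit =
      subscribers.get(topic).foreach(handler => handler(msg))
  }

  def main(args: Array[String]): Unit = {
    val bus = new PubSub

    // Controller0 owns a container for action A, so it consumes A's overflow topic.
    bus.subscribe("overflow-A") { msg =>
      val result = s"result of ${msg.action} for ${msg.requestId}"
      // Controller0 cannot answer the client directly; it hands the result
      // back to the Controller named in replyTo.
      println(s"send '$result' back to ${msg.replyTo}")
    }

    // Controller1 has no container for A yet and overflows the request.
    bus.publish("overflow-A", OverflowMessage(action = "A", requestId = "req-42", replyTo = "Controller1"))
  }
}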

Does that make sense?

Cheers,
Markus

