openwhisk-dev mailing list archives

From Tyson Norris <tnor...@adobe.com.INVALID>
Subject Re: Improving support for UI driven use cases
Date Sun, 02 Jul 2017 14:42:24 GMT

> On Jul 2, 2017, at 3:05 AM, Markus Thömmes <markusthoemmes@me.com> wrote:
> 
> Right, I think the UI workflows are just an example of apps that are latency sensitive
> in general.
> 
> I had a discussion with Stephen Fink on the matter of detecting ourselves that an action
> is latency sensitive by using the blocking parameter or, as mentioned, the user's
> configuration in terms of web action vs. non-web action. The conclusion there was that we
> probably cannot reliably detect latency sensitivity without asking the user to do so.
> Having such an option has implications on other aspects of the platform: Why would one
> not choose that option?
> 

Because (a) your use case is event-driven and the triggering client simply doesn’t care
about the response, or (b) you want a guarantee that the activation will be processed even
if the client stops listening for the response (e.g. it received a 202 instead of a 200
after a timeout).
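
For reference, the distinction is already visible in the CLI today: a non-blocking
invocation returns only an activation id, while a blocking one waits for the result and
degrades to a 202 plus an activation id if the action outlives the blocking timeout (the
action name and parameter here are placeholders):

    # fire-and-forget: returns an activation id immediately
    wsk action invoke myAction --param text hello

    # blocking: waits for the activation result; falls back to
    # 202 + activation id when the blocking timeout is exceeded
    wsk action invoke myAction --blocking --param text hello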

> To Rodric's points, I think there are two topics to discuss:
> 
> 1. The programming model: The current model encourages users to break their actions apart
> into "functions" that take payload and return payload. Having a deployment model outlined
> could, as noted, encourage users to use OpenWhisk as a way to rapidly deploy/undeploy
> their usual webserver-based applications. The current model is nice in that it solves a
> lot of problems for the customer in terms of scalability and "crash safety".
> 

But if you require use of the programming model to always achieve scalability, you prevent
the use of libraries that may not be ported to that programming model. Consider an npm
module that wraps Twitter API calls, which I use in my action to produce tweets. Is my
only option for making my action scale (better than 1 user : 1 container) to reproduce the
npm module in terms of OpenWhisk functions for each HTTP call and compute operation?
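
As a concrete sketch of the problem (the module name and its API are illustrative
stand-ins, not a recommendation), the action below is little more than glue around a
dependency. The JSON-in/JSON-out contract is intact, but nothing inside the module can be
decomposed into OpenWhisk functions without rewriting the module itself:

    // sketch: an action wrapping an npm dependency; 'twitter-lite'
    // and its API stand in for any third-party module
    const Twitter = require('twitter-lite');

    function main(params) {
      const client = new Twitter({
        consumer_key: params.consumerKey,
        consumer_secret: params.consumerSecret,
        access_token_key: params.tokenKey,
        access_token_secret: params.tokenSecret
      });
      // the module performs its own HTTP calls and compute internally;
      // under a 1 user : 1 container model every invocation of this
      // glue code still occupies a whole container
      return client.post('statuses/update', { status: params.text })
        .then(tweet => ({ id: tweet.id_str }));
    }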


> 2. Raw throughput of our deployment model: Setting the concerns aside, I think it is
> valid to explore concurrent invocations of actions on the same container. This does not
> necessarily mean that users start to deploy monolithic apps as noted above, but it
> certainly could. Keeping our JSON-in/JSON-out model, at least for now, could encourage
> users to continue to think in functions. Having a toggle per action which is disabled by
> default might be a good way to start here, since many users might need to change action
> code to support that notion, and for some applications it might not be valid at all. I
> think it was also already noted that this imposes some of the "old-fashioned" problems on
> the user, like: How many concurrent requests will my action be able to handle? That kinda
> defeats the seamless-scalability point of serverless.

I’m not suggesting changing any programming model, only that the programming model stops
at the point where I depend on libraries for anything, so relying on the programming model
to achieve throughput scalability will not be practical in many cases. I pointed out both
that the problems are old-fashioned, yes, and that concurrency is (still) a reasonable way
to address them. Doing so also does not defeat any scalability provisions of the
serverless mantra: additional containers can still be started per action, just not *one
per concurrent user*. You still need to provide some estimate of your action's resource
usage; the only difference is that your approach to determining that estimate changes.
E.g. if I can estimate that my action operates well at 100 rps with 500 concurrent users,
and worse with more concurrent users, then I can configure the system to start more
containers once 500 concurrent activations are reached, and to stop those containers when
the count drops back below 500.
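
As a sketch, such a per-action limit could be expressed through the existing annotation
mechanism; the key name below is hypothetical, only the generic "-a key value" syntax
exists today:

    # hypothetical annotation declaring this action safe for up to
    # 500 concurrent activations per container
    wsk action update myAction -a max-concurrent-activations 500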

How do you estimate the resource requirements of single-user-workflow actions today? Maybe
that is something we can discuss, to clarify how it would be done in a concurrent
activation model.
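
To illustrate how the estimate would translate: the container count follows something like
ceil(in-flight activations / per-container limit) rather than one container per in-flight
activation. A minimal sketch of that arithmetic:

    // sketch: scaling arithmetic with concurrent activations
    function containersNeeded(inflight, perContainerLimit) {
      return Math.ceil(inflight / perContainerLimit);
    }

    containersNeeded(1200, 500); // => 3 containers instead of 1200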

Thanks
Tyson