openwhisk-dev mailing list archives

From Tyson Norris <>
Subject Re: Improving support for UI driven use cases
Date Sat, 01 Jul 2017 22:35:34 GMT

> On Jul 1, 2017, at 2:07 PM, Alex Glikson <> wrote:
>> a burst of users will quickly exhaust the system, which is only fine for 
> event handling cases, and not fine at all for UI use cases.
> Can you explain why is it fine for event handling cases?
> I would assume that the key criteria would be, for example, around 
> throughput and/or latency (and their tradeoff with capacity), and not 
> necessarily the nature of the application per se.
> Regards,
> Alex

Sure - with event handling, where blocking=false, or where a timeout response of 202 (with
the result fetched later) is tolerable, exhausting container resources simply means that
latency grows with the number of events that arrive after the point of saturation.
If you can only process 100 events at a time, an arrival of 1000 simultaneous events
means the second 100 events are processed only after the first 100 (twice the normal
latency), the third 100 after that (3 times normal latency), the 4th 100 after that
(4 times normal latency), and so on. But if no user is sitting at a browser waiting for a
response, they are unlikely to care whether the processing occurs 10ms or 10min after the
triggering event. (This is an exaggeration, but you get the point.)
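The batch-by-batch latency growth described above can be sketched with a bit of arithmetic (the 100-event capacity and 10ms unit latency are just illustrative numbers from the example, not OpenWhisk defaults):

```python
import math

def batch_latencies(num_events, capacity, unit_latency_ms):
    """Latency each batch experiences when simultaneous arrivals exceed
    capacity and batches are processed back to back after saturation."""
    batches = math.ceil(num_events / capacity)
    return [unit_latency_ms * (i + 1) for i in range(batches)]

# 1000 simultaneous events, capacity of 100, 10 ms per activation:
# the 10th (final) batch waits 10x the normal latency.
latencies = batch_latencies(1000, 100, 10)
```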

In the case where a user is staring at a browser waiting for a response, such variance in
latency, driven purely by the raw number of users in the system relative to the raw number
of containers in the system, will not be usable. Consider concurrency not as a panacea for
exhausting container pool resources, but rather as a way to dampen the slope of user traffic
growth vs. required container pool growth, making it something like 1000:1 (1000 concurrent
users require 1 container) instead of a 1:1 relationship.
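The dampening effect can be sketched as a simple capacity calculation (the user counts and the 1000-per-container concurrency level are illustrative assumptions taken from the ratio above):

```python
import math

def containers_needed(concurrent_users, per_container_concurrency):
    """Containers required for a given number of concurrent users, where
    each container can handle the given number of in-flight activations."""
    return math.ceil(concurrent_users / per_container_concurrency)

# 1:1 (one activation per container) vs 1000:1 intra-container concurrency:
containers_needed(5000, 1)     # 5000 containers
containers_needed(5000, 1000)  # 5 containers
```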
