openwhisk-dev mailing list archives

From Rodric Rabbah <rod...@gmail.com>
Subject Re: Improving support for UI driven use cases
Date Sun, 02 Jul 2017 08:42:29 GMT
The thoughts I shared around how to realize better packing with intrinsic actions are aligned
with your goals: getting more compute density with a smaller number of machines. This
is a very worthwhile goal.

I noted earlier that packing more activations into a single container warrants a different
resource manager with its own container life cycle management (e.g., it's almost at the level
of: provision a container for me quickly and let me have it to run my monolithic code for
as long as I want). 
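
To make the distinction concrete, here is a minimal, purely illustrative sketch (TypeScript;
none of these interface or method names exist in OpenWhisk) of what a resource manager that
distinguishes the two container life cycles might expose:

    // Hypothetical sketch: two container life cycles side by side.
    interface ContainerLease {
      containerId: string;
      // An ephemeral lease ends with the activation; a long-lived lease
      // is held by the caller until it is explicitly released.
      release(): Promise<void>;
    }

    interface ContainerResourceManager {
      // Classic path: the container is paused/recycled after each activation.
      acquireEphemeral(actionImage: string): Promise<ContainerLease>;

      // Proposed path: "provision a container for me quickly and let me
      // have it to run my monolithic code for as long as I want".
      acquireLongLived(actionImage: string, maxConcurrency: number): Promise<ContainerLease>;
    }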

Some challenges were already mentioned wrt sharing state, resource leaks and possible data
races. Perhaps defining the intra-container resource isolation model - processes, threads,
"node vm", ... - would be helpful as you refine your proposal. This can also address how one
might deal with intra-container noisy neighbors.
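
As one illustration of the "node vm" option, the sketch below (TypeScript against Node's
built-in vm module; the helper function is made up and not part of any runtime) shows how
concurrent activations inside one container could at least avoid sharing top-level state.
Note that it does nothing for CPU or memory contention, so the noisy-neighbor concern remains:

    import * as vm from "vm";

    // Each activation runs in its own V8 context, so top-level variables of
    // one activation are not visible to another sharing the same container.
    function runActivation(actionCode: string, params: object): unknown {
      const sandbox = { params, result: undefined as unknown };
      vm.createContext(sandbox);  // fresh global object per activation
      vm.runInContext(`result = (${actionCode})(params)`, sandbox, { timeout: 1000 });
      return sandbox.result;
    }

    // Two activations share the container but not their globals.
    console.log(runActivation("p => ({ echo: p.name })", { name: "a" }));
    console.log(runActivation("p => ({ echo: p.name })", { name: "b" }));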

Hence in terms of resource management at the platform level, I think it would be a mistake
to treat intra-container concurrency the same way as ephemeral activations that are run and
done. Once the architecture and scheduler support a heterogeneous mix of resources, treating
some actions as intrinsic operations becomes easier to realize; in other words, this is
complementary to the overall proposed direction if the architecture is done right.
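
As a sketch of what "a heterogeneous mix of resources" could mean to the scheduler (again
TypeScript, with names invented for illustration), run-and-done activations, shared long-lived
containers, and intrinsic operations would simply be different resource kinds with their own
placement policies:

    type ResourceKind =
      | { kind: "ephemeral"; memoryMb: number }                       // run-and-done activation slot
      | { kind: "shared"; memoryMb: number; maxConcurrency: number }  // intra-container concurrency
      | { kind: "intrinsic"; operation: string };                     // intrinsic/built-in action

    // Toy placement rule only; a real scheduler would use much richer policies.
    function sameSchedulingPool(a: ResourceKind, b: ResourceKind): boolean {
      return a.kind === b.kind;
    }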

To Alex's point, when you're optimizing for latency, you don't need to be constrained to UI
applications. Maybe this is more of a practical motivation based on your workloads.

-r

On Jul 2, 2017, at 2:32 AM, Dascalita Dragos <ddragosd@gmail.com> wrote:

>> I think the opportunities for packing computation at finer granularity
>> will be there. In your approach you're tending, it seems, toward taking
>> monolithic codes and overlapping their computation. I tend to think this
>> will work better with another approach.
> 
> +1 to making the serverless system smarter in managing and running the code
> at scale. I don't think the system is there yet. There are
> limitations that could be addressed by simply allowing developers to
> control which actions can be invoked concurrently. We could also consider
> designing the system to "learn" this intent by observing how the action is
> configured by the developer: whether it's an HTTP endpoint or an event handler.
> 
> Given that today we can improve performance by allowing concurrency in
> actions, and by invoking them faster, why would we not benefit from this
> now, and update the implementation later, once the system improves? Or are
> there better ways available now to match this performance that are not
> captured in the proposal?
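
As a footnote to the point above about letting developers declare which actions may run
concurrently, here is one hedged sketch (TypeScript; the annotation name and helpers are
hypothetical, not an existing OpenWhisk convention) of how an invoker could honor such a
declaration when deciding whether a warm container can take another activation:

    interface ActionMetadata {
      name: string;
      annotations: Record<string, unknown>;
    }

    // Default to 1, i.e. today's one-activation-per-container behavior,
    // unless the developer explicitly opted in to higher concurrency.
    function maxConcurrency(action: ActionMetadata): number {
      const declared = action.annotations["max-concurrent-activations"];
      return typeof declared === "number" && declared > 1 ? declared : 1;
    }

    function canReuseWarmContainer(action: ActionMetadata, inFlight: number): boolean {
      return inFlight < maxConcurrency(action);
    }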
