openwhisk-dev mailing list archives

From Dascalita Dragos <ddrag...@gmail.com>
Subject Re: New ContainerPool
Date Tue, 04 Apr 2017 21:50:01 GMT
This looks very promising, Markus! Great work!


I'm wondering if anyone is currently looking into integrating HTrace and
Zipkin; if there's no ongoing effort, I'm interested in doing this. At the
very least, my team and I are interested in getting a distributed tracing
solution in place; it would be helpful in highlighting areas where performance
can be improved. It should also highlight the improvements brought by this new
ContainerPool.

dragos

On Mon, Apr 3, 2017 at 5:04 AM Markus Thömmes <markusthoemmes@me.com> wrote:

> Thanks James,
>
> I will; I've already drafted a baseline post :).
>
> On 03 April 2017 at 13:59, James Thomas <jthomas.uk@gmail.com> wrote:
>
> This looks fantastic - great work.
> You should definitely write this up as a new blog post!
>
> On 1 April 2017 at 14:05, Markus Thömmes <markusthoemmes@me.com> wrote:
>
> Hi out there,
>
> Over the past couple of weeks, I started working on a new ContainerPool
> (and thus eventually a new Invoker). It started as a weekend investigation
> into how one would write the Invoker if one started on a green field, and
> it turned out to be a valuable step forward after all.
>
>
> The new ContainerPool is modeled around Akka Actors, and I put an emphasis
> on the testability of the code. Before diving deeper into performance work
> on OpenWhisk, we need to be able to quickly verify new models of scheduling,
> so I abstracted the "container providing" code away from the pool itself.
> One can now easily simulate different workloads without actually talking to
> Docker. A nice side-effect of this is that the Container interface (it's
> literally a trait) is completely pluggable and can also be filled in by
> "insert-your-favorite-container-solution-here".
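>
> To make that concrete, here's a minimal sketch of what such a pluggable
> container trait could look like (the method names and signatures below are
> illustrative assumptions, not the exact interface from the pull-request):
>
>     import scala.concurrent.Future
>     import spray.json.JsObject
>
>     /** Hypothetical shape of a pluggable container abstraction. */
>     trait Container {
>       /** Initializes the container with the action's code. */
>       def initialize(code: String): Future[Unit]
>
>       /** Runs the action with the given JSON arguments, returns its JSON result. */
>       def run(arguments: JsObject): Future[JsObject]
>
>       /** Pauses the container to free resources between invocations. */
>       def pause(): Future[Unit]
>
>       /** Resumes a previously paused container. */
>       def resume(): Future[Unit]
>
>       /** Destroys the container and releases its resources. */
>       def destroy(): Future[Unit]
>     }
>
> A Docker-backed implementation and a purely in-memory test double can then
> both fill in such a trait, which is what makes simulating workloads without
> talking to Docker possible.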
>
>
> In terms of performance, I did see a significant improvement in single-user
> throughput, but I haven't yet gotten around to a proper write-up of the
> experiment's setup and methodology, hence I'll not show hard numbers for
> now. We're still missing a common load-test suite which we can all use to
> verify our changes.
>
>
> So, all in all, the features:
>
> - Eliminated error-prone concurrency handling through the actor model
> - Eliminated busy-looping to get a container
> - Increased testability by drawing sharp abstraction layers and making sure
>   each component is testable separately
> - Increased "experimentability" with scheduling algorithms (they are
>   encapsulated themselves)
> - Performance increases, very likely due to the implementation of a "pause
>   grace": the container is not paused for a defined grace period and can
>   continue its work immediately if another request comes in during that
>   time (see the sketch after this list)
> - Pluggability of the container interface
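>
> To illustrate the "pause grace" idea, here's a rough, self-contained Akka
> sketch (the actor, the message names, and the 10-second grace period are
> made up for illustration, not the actual code from the pull-request):
>
>     import scala.concurrent.duration._
>     import akka.actor.{Actor, ActorSystem, Cancellable, Props}
>
>     // Hypothetical messages; not the actual protocol from the pull-request.
>     final case class Run(payload: String)
>     case object PauseTimedOut
>
>     /** Keeps the container unpaused for a grace period after each run. */
>     class ContainerProxy(pauseGrace: FiniteDuration) extends Actor {
>       import context.dispatcher
>
>       private var graceTimer: Option[Cancellable] = None
>       private var paused = false
>
>       def receive: Receive = {
>         case Run(payload) =>
>           // A request within the grace period cancels the pending pause.
>           graceTimer.foreach(_.cancel())
>           if (paused) { println("resuming container"); paused = false }
>           println(s"running action with payload: $payload")
>           // Only pause once the grace period elapses without further requests.
>           graceTimer = Some(context.system.scheduler.scheduleOnce(pauseGrace, self, PauseTimedOut))
>
>         case PauseTimedOut =>
>           println("grace period expired, pausing container")
>           paused = true
>       }
>     }
>
>     object PauseGraceDemo extends App {
>       val system = ActorSystem("pause-grace-demo")
>       val proxy = system.actorOf(Props(new ContainerProxy(10.seconds)), "container0")
>       proxy ! Run("""{"name": "world"}""")
>     }
>
> The upshot: back-to-back requests never pay the pause/resume round-trip,
> while an idle container is still paused once the grace period runs out.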
>
>
> The pull-request is open at https://github.com/openwhisk/openwhisk/pull/2021.
> It's passing tests already. Test coverage is still a bit low, but Sven
> Lange-Last and I are working to get it done quite soon.
>
>
> It will be delivered in multiple smallish pull-requests, the first of which
> is open here: https://github.com/openwhisk/openwhisk/pull/2092. At first,
> the new pool is going to be behind a feature flag. Once the pool is
> battle-proven, we can flip the switch and remove the old code as applicable.
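>
> As a minimal sketch of what hiding the new pool behind a feature flag could
> look like (the property name and the two pool actors are hypothetical, not
> the actual mechanism used in the pull-request):
>
>     import akka.actor.{Actor, ActorSystem, Props}
>
>     // Hypothetical stand-ins for the old and new pool actors.
>     class OldContainerPool extends Actor {
>       def receive: Receive = { case msg => println(s"old pool: $msg") }
>     }
>     class NewContainerPool extends Actor {
>       def receive: Receive = { case msg => println(s"new pool: $msg") }
>     }
>
>     object PoolSelection extends App {
>       val system = ActorSystem("invoker")
>
>       // Hypothetical flag; the real property name may differ.
>       val useNewPool = sys.props.get("whisk.use-new-containerpool").contains("true")
>
>       val pool =
>         if (useNewPool) system.actorOf(Props[NewContainerPool], "containerPool")
>         else system.actorOf(Props[OldContainerPool], "containerPool")
>
>       pool ! "invoke"
>     }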
>
>
> Feel free to have a play with it; feedback and code-reviews are very, very
> welcome!
>
>
> Cheers,
>
> Markus
>
> --
> Regards,
> James Thomas
>
>
