openwhisk-dev mailing list archives

From Carlos Santana <csantan...@gmail.com>
Subject Re: The ActionLoop based runtime for Python3.6 for OpenWhisk is 5 times faster than the current runtime
Date Sun, 18 Nov 2018 13:09:30 GMT
TLDR +1


I'm glad to see performance tests that compare the improvements with
real numbers.

This kind of runtime improvement backed by a benchmark is exactly what I
love to see: it lets us not only measure how a new version of a runtime
performs, but also build a pipeline that can catch performance regressions.

Yes, I agree 100% that this new approach is the way forward to improve
extensibility and performance for the majority of languages (Swift, Go,
Rust, Haskell, bash, Perl) where there is no latency cost from loading a
language runtime (JIT or JVM):
- it uses the Go executable web server proxy instead of Python Flask
- it uses a stdin/stdout loop with a target process that doesn't exit
(see the sketch after this list)
- it is potentially more efficient for managing concurrent processes,
building on top of Tyson's recent work
- it has the efficiency of a single proxy binary
- it has the simplicity of a meta-compiler driven by a descriptor,
runnable with a single docker run
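
To make the stdin/stdout loop concrete, here is a minimal sketch of what a
loop-style Python action could look like: a process that never exits,
reads one JSON object per line from stdin, and writes one JSON result per
line. This is only illustrative; the exact ActionLoop protocol (for
example, which stream carries results versus logs) is defined by the
actionloop proxy and may differ in detail.

    #!/usr/bin/env python3
    # Illustrative only: a long-running action that loops over stdin.
    import sys
    import json

    def main(args):
        name = args.get("name", "world")
        return {"greeting": "Hello " + name}

    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        args = json.loads(line)          # one JSON object per input line
        result = main(args)
        sys.stdout.write(json.dumps(result) + "\n")
        sys.stdout.flush()               # flush so the proxy sees it now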

As far as names go, it doesn't matter much to me; what end users see
doesn't change much from what they see today. For example, for Go they
only see "--kind go:1.11".

For end users of dockerskeleton they don't see that name, they see
"native": "wsk action update myexecutable my.zip --native".
With this loop approach their executable or bash or Perl script would need
to change to loop over stdin reading new JSON lines.
So maybe we have --native-loop, or -loop, or --ow --native:2.
Or maybe something more drastic: no flag at all, and having a .zip artifact
simply means it is a loop-style executable zip that gets processed by the
action-loop Docker image runtime.

For a single source file we continue with --kind and a language-specific
runtime that can compile on the fly. But single source files should be
used only for demos/prototypes, as this introduces higher cold-start
latency for certain types of runtimes like Swift, Go, Rust, C++, etc.
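
For concreteness, the two invocation styles mentioned above (today's
--native executable path and the --kind single-source path) look roughly
like this; action and file names are just placeholders:

    # pre-built executable packaged as a zip
    wsk action update myexecutable my.zip --native

    # single source file, compiled on the fly by a language-specific runtime
    wsk action update hello hello.go --kind go:1.11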


— Carlos
PS: I will be reviewing and building on top of Michele's PR for Swift 4.2
using this new approach and doing a performance comparison.

On Sun, Nov 18, 2018 at 7:28 AM Michele Sciabarra <michele@sciabarra.com>
wrote:

> Hello all, before commenting, here are the numbers:
>
> *** Testing OpenWhisk Classic Python ***
> Running 1m test @ http://localhost:8080/run
>   1 threads and 1 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    49.91ms    2.33ms  60.23ms   79.28%
>     Req/Sec    19.87      2.58    30.00     93.59%
>   1202 requests in 1.00m, 148.01KB read
> Requests/sec:     20.03
> Transfer/sec:      2.47KB
>
> *** Testing ActionLoop Python ***
> Running 1m test @ http://localhost:8080/run
>   1 threads and 1 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    11.77ms  673.30us  37.85ms   93.85%
>     Req/Sec    85.14      5.56    90.00     84.25%
>   5101 requests in 1.00m, 632.64KB read
> Requests/sec:     84.95
> Transfer/sec:     10.54KB
>
> What is this?
>
> For those who are new to the list, let me recap a bit. I am the author
> of the runtime for Go.
> Because Go is a compiled language, I implemented a different approach
> than the other languages.
> Other runtimes are built in the target language: they implement the
> runtime and load the code in a language-dependent way.
> In the Go runtime, your action is a full-featured standalone executable,
> and it communicates with the proxy via pipes and I/O.
>
> The runtime evolved into a complete infrastructure for implementing
> actions in ANY programming language that can read input and write output
> line by line and parse JSON. It was designed for compiled programming
> languages (Go, but also C/C++, Rust, Haskell, Nim), but there is nothing
> preventing its use with scripting languages: Python, for example.
>
> Building a runtime using the current infrastructure is extremely easy:
> you just need a Dockerfile, a "compiler" and a launcher. I already built
> the support for Go, Swift, an experiment using Scala, and now I created
> one for Python. It took me 2 hours today to build the runtime. With a bit
> more work it could become even easier: there is now a "compiler" script,
> and it could be just a "json" descriptor...
>
> Then I decided to benchmark the result. I created a very simple "main.py"
> (just the classic "hello") and I used the "wrk" tool to benchmark the
> "raw" HTTP performance, executing a single thread with one connection.
> The result is what you can see above. The current runtime can perform,
> using Docker on my Mac, 1202 requests in one minute, while the runtime
> built with ActionLoop can perform 5101 requests in one minute.
>
> I am not entirely surprised, because the current runtime uses
> Python-based HTTP support, while the ActionLoop proxy is entirely native
> code and communicates with the action over (internally optimized) I/O.
>
> Code is here:
> https://github.com/sciabarracom/incubator-openwhisk-runtime-actionloop
>
> So, is it worth using this as an "official" way to write runtimes? In
> that case I can document the whole procedure... There was some discussion
> of using it as the "dockerskeleton v2". I am not sure that name is
> suitable, and the concept is a bit different, although it is very
> generic. I would prefer something like "runtime-sdk". Let me know
> your thoughts.
>
> --
>   Michele Sciabarra
>   michele@sciabarra.com
>
