openwhisk-dev mailing list archives

From "Matt Rutkowski" <>
Subject Re: How to best run non-local tests in ASF (was: Performance tests for OpenWhisk)
Date Fri, 06 Apr 2018 16:49:34 GMT
Hi Michael,

The model I am familiar with is from Apache SystemML, where Apache 
Infra could not provide the compute hardware (actual GPUs) needed to 
run the code for testing in a reasonable amount of time. IBM therefore 
worked out a deal with Apache under which IBM donated suitable compute 
resources and gave Apache Infra access to manage them.

I reached out this morning to a colleague in my group at IBM, Luciano 
Resende, who negotiated and set up these testing pipelines between IBM 
and Apache (for SystemML), and who also described similar arrangements 
for the Apache Spark and Bahir projects.

Here is what he described to me:
        Here are the possibilities for a heavy CI infrastructure 
for an Apache project (assuming the Apache CI infrastructure does not 
provide you enough resources):

Self-hosted CI infrastructure: This is the scenario we use in Apache 
Spark, Apache SystemML, and some portions of Apache Bahir. A company 
provisions the machines (in the case of Spark it's AMPLab; for the 
others it's IBM), and then we configure and manage those machines with 
the project communities, providing public access to build outputs and 
management access on request for committers/PMC members.

Apache-managed donated machines: In this scenario, which was a little 
more popular a few years ago, you procure a set of machines and donate 
them to Apache through a targeted donation, which in short means they 
are used by your project and not shared with other projects. In this 
case, Apache manages the infrastructure: the nodes are added to its 
Jenkins infrastructure and your jobs are assigned to run on those 
machines.

He indicated that if we want to hear more, he can join this thread to 
answer any further or more detailed questions.

Kind regards,

From:   Michael Marth <>
To:     "" <>
Date:   04/06/2018 08:39 AM
Subject:        How to best run non-local tests in ASF (was: Performance 
tests for OpenWhisk)

Hi mentors (and others),

I had an offline discussion this week in which the question came up of 
how an ASF project should best go about running 
performance/throughput/scalability tests – i.e. tests that cannot be 
run locally and require a repeatable environment.

Some options:

* interested companies run the tests on their own infra and publish 
results. Pretty lame, especially because typically only that company’s 
engineers can access the env and investigate further.

* interested companies donate cash to sponsor compute resources, 
committers can run and investigate the tests. Ideal from tech perspective, 
but I have no idea how that cash would make its way from the ASF to a 
particular project.

* maybe a middle-ground: interested party that happens to have a public 
cloud offering gives credentials to committers

I am mainly interested to learn whether other ASF projects (e.g. in 
the Big Data/Hadoop ecosystem) do something similar, or whether there 
is an ASF-recommended way to do this. Or else, where I could ask this 
question.


From: Michael Marth <>
Date: Wednesday 3 May 2017 20:57
To: "" <>
Subject: Re: Performance tests for OpenWhisk


Quick update: I sent the below to users@infra. So far no reaction. The 
archive is here [1], but Bertrand tells me only ASF members have 
access - for whatever reason.



On Fri, Apr 28, 2017 at 2:23 PM, Michael Marth <<>> wrote:

Dear Infra team,

I am enquiring on behalf of the OpenWhisk project (currently in the 
Incubator).

We would like to periodically run performance tests on a distributed 
environment (OpenWhisk typically runs on more than one machine). So we 
are basically looking for the ability to spin up/tear down a number of 
machines and use them exclusively for a certain amount of time (so 
that VMs are not shared and the performance test results are 
comparable over time).

The order of magnitude would be ~5-10 VMs for 1 hour, 3 times a week.
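For a rough sense of scale, the ask above amounts to a small resource budget. A quick back-of-the-envelope sketch (the per-VM-hour rate below is an illustrative assumption of mine; the thread only specifies the 5-10 VMs, 1 hour, 3x/week cadence):

```python
# Back-of-the-envelope VM budget for the proposed test cadence.
# Assumption (not from the thread): $0.10 per VM-hour, a purely
# illustrative cloud rate.

VMS_MAX = 10             # upper bound of the proposed 5-10 VMs
HOURS_PER_RUN = 1        # exclusive use per run
RUNS_PER_WEEK = 3

RATE_PER_VM_HOUR = 0.10  # hypothetical price, for illustration only

# Weekly VM-hours at the high end of the request.
vm_hours_per_week = VMS_MAX * HOURS_PER_RUN * RUNS_PER_WEEK

# Approximate monthly cost (52 weeks spread over 12 months).
cost_per_month = vm_hours_per_week * 52 / 12 * RATE_PER_VM_HOUR

print(vm_hours_per_week)        # 30 VM-hours per week
print(round(cost_per_month, 2)) # 13.0 (dollars/month at the assumed rate)
```

Even at the upper bound this is on the order of tens of dollars per month, which is why a targeted donation or sponsored cloud credits are plausible routes.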

I would like to find out if there is an ASF-supported mechanism to do 
this. For example, can Infra provide such infrastructure? Or is there 
a cloud provider (like Azure) that might sponsor such efforts with 
VMs? Or maybe there is an established way for commercial companies 
that are interested in an ASF project to sponsor (fund) such tests?

If none of the above exists, then it would also be helpful for us to 
get to know how other projects run such tests.

Thanks a lot!



From: Markus Thömmes <<>>
Reply-To: "<>"
Date: Wednesday 26 April 2017 12:59
To: "<>"
Subject: Re: Performance tests for OpenWhisk

Hi Michael,

Yeah, that sounds pretty much spot on. I'd like to have at least 2 VMs 
with 4+ cores and 8 GB of memory. One VM would host the management 
stack while the other would be dedicated to an Invoker only. That way 
we could most easily assess single-invoker performance.

Thanks for helping!



Am 26. April 2017 um 11:36 schrieb Michael Marth <<>>:


Does what I describe reflect what you are looking for?

If yes, I am happy to ask on infra.

Let me know


On 26/04/17 07:52, "Bertrand Delacretaz" <<>> wrote:

Hi Michael,

On Tue, Apr 25, 2017 at 6:52 PM, Michael Marth <<>> wrote:

...Maybe our mentors can chime in. Has this been discussed in the ASF 
board or so?...

Best would be to ask the ASF infrastructure team via <> - briefly 
describe what you need to see what's possible.


