openwhisk-dev mailing list archives

From "David P Grove" <gro...@us.ibm.com>
Subject Re: Proposal: Memory Aware Scheduling
Date Thu, 23 Aug 2018 14:31:40 GMT
Awesome!

I'm working on the matching PR for the kube-deploy repo now.

--dave


Christian Bickel <cbickel@apache.org> wrote on 08/23/2018 05:11:56 AM:

> From: Christian Bickel <cbickel@apache.org>
> To: dev@openwhisk.apache.org
> Date: 08/23/2018 05:12 AM
> Subject: Re: Proposal: Memory Aware Scheduling
>
> Hi everyone,
>
> The implementation of this proposal has just been merged with
> https://github.com/apache/incubator-openwhisk/commit/5b3e0b6a334b78fc783a2cd655f0f30ea58a68e8.
>
> Greetings
> Christian
> On Thu, 10 May 2018 at 13:36, Markus Thoemmes
> <markus.thoemmes@de.ibm.com> wrote:
> >
> > Thanks Dominic!
> >
> > Yep, that's exactly the thought.
> >
> > Towards your questions:
> >
> > # 1. How do loadbalancers keep the state:
> >
> > They stay as they are. The semaphores, which today hold slots
> > calculated from cpu-shares, will hold memory-based slots in the
> > future. No change needed there in my opinion.
> >
> > # 2. How are slots shared among loadbalancers:
> >
> > Same answer as above: Like today! In your example, each
> > loadbalancer will have 16 slots to give away (assuming 2
> > controllers). This has a wrinkle in that the maximum possible memory
> > size shrinks in proportion to the number of loadbalancers in the
> > system. For a first step, this might be fine. In the future we need
> > to implement vertical sharding, where the loadbalancers divide the
> > invoker pool among themselves, to make bigger memory sizes possible
> > again. Good one!
> >
> > Another wrinkle is that fragmentation gets worse as the number of
> > loadbalancers increases. Again, I think this is acceptable for now,
> > since the recommended number of controllers is rather small today.
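[Editorial illustration] The sharing scheme and its wrinkle can be made concrete with a small sketch. The 32-slot invoker total is an assumption (16 slots x 2 controllers, matching the example above), as is the 128MB-per-slot figure; the function names are hypothetical.

```python
# Hypothetical sketch of dividing invoker capacity evenly among
# loadbalancers; all concrete numbers here are assumed for illustration.
SLOT_MB = 128  # assumed memory represented by one slot

def slots_per_loadbalancer(invoker_slots, num_controllers):
    # each controller's loadbalancer gets an even share of the capacity
    return invoker_slots // num_controllers

def max_action_memory_mb(invoker_slots, num_controllers):
    # the largest single activation one loadbalancer can still place
    return slots_per_loadbalancer(invoker_slots, num_controllers) * SLOT_MB

# 2 controllers -> 16 slots each, as in the example above
assert slots_per_loadbalancer(32, 2) == 16
# the wrinkle: doubling the controllers halves the biggest placeable action
assert max_action_memory_mb(32, 2) == 2048
assert max_action_memory_mb(32, 4) == 1024
```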
> >
> > # 3. Throttling mechanism:
> >
> > Very good one, I missed that in my initial proposal: Today we
> > limit the number of concurrent activations, or phrased differently,
> > the number of slots occupied at any point in time. The throttling
> > can keep that definition ("number of slots occupied at any point
> > in time") and will then effectively limit the amount of memory a
> > user can consume in the system, i.e. if a user has 1000 slots free,
> > she can have 250x 512MB activations running, or 500x 256MB
> > activations (or any mixture, of course).
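[Editorial illustration] The arithmetic above works out if one slot is taken to correspond to 128MB (an assumption chosen so that the mail's figures line up: a 512MB activation costs 4 slots and a 256MB activation costs 2):

```python
# Worked version of the throttling numbers above, assuming one slot
# corresponds to 128MB of memory (illustrative, not a documented value).
SLOT_MB = 128

def slots_needed(memory_mb):
    # weight of one activation under memory-aware throttling
    return memory_mb // SLOT_MB

user_slots = 1000
# 250 activations of 512MB fill the quota exactly ...
assert 250 * slots_needed(512) == user_slots
# ... as do 500 activations of 256MB ...
assert 500 * slots_needed(256) == user_slots
# ... or any mixture, e.g. 100 x 512MB plus 300 x 256MB
assert 100 * slots_needed(512) + 300 * slots_needed(256) == user_slots
```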
> >
> > It's important that we provide a migration path though, as this
> > will change the behavior in production systems. We could make the
> > throttling strategy configurable and decide between
> > "maximumConcurrentActivations", which ignores the weight of an
> > action and behaves just like today, and "memoryAwareWeights", which
> > is the new way of throttling described above.
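[Editorial illustration] The configurable strategy could look roughly like this sketch. The two strategy names come from the mail; the function, parameter names, and 128MB slot size are hypothetical:

```python
# Sketch of a configurable throttling strategy; only the two strategy
# names are from the proposal, everything else is assumed.
def activation_weight(strategy, memory_mb, slot_mb=128):
    if strategy == "maximumConcurrentActivations":
        # today's behavior: every action weighs 1, memory ignored
        return 1
    elif strategy == "memoryAwareWeights":
        # new behavior: weight proportional to the action's memory limit
        return memory_mb // slot_mb
    raise ValueError(f"unknown throttling strategy: {strategy}")

assert activation_weight("maximumConcurrentActivations", 512) == 1
assert activation_weight("memoryAwareWeights", 512) == 4
```

Keeping today's behavior as the default would give operators a safe migration path before opting in to memory-aware weights.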
> >
> > Cheers,
> > Markus
> >
>
