mesos-dev mailing list archives

From Qian Zhang <zhq527...@gmail.com>
Subject Re: Please review design doc for task resizing
Date Thu, 10 Dec 2015 14:55:54 GMT
Since we all agree that option 2 is the best option for the scheduler API
change, I have updated the design doc by marking it as the first option,
which means it is the final decision.

> I see. However, that operation is not idempotent. Imagine you issue a
> resize request and for some reason the request takes a long time to
> carry out, and you don't have a way to guarantee that the request was
> received (for example, during a master failover). In the meantime, you
> issue another resize. When both land, it may not be the action you
> wanted. containerizer->update() applies the aggregate size anyway, so
> you need to keep track of the 'sign' of the resize all the way down to
> the slave process.
>

Yes, I understand that the operation in the current design is not
idempotent. But I think that after a master failover, the framework will
do reconciliation with the master so that it knows the latest resources
used by its task, and it can then decide whether or not to issue another
resize operation.
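
To make the idempotency point concrete, here is a minimal standalone
sketch (all type and function names are invented for illustration; this
is not Mesos code) of why a delta-based resize is unsafe to retry, while
an absolute "desired size" is safe:

    #include <iostream>

    // Toy resource vector, just for illustration.
    struct Resources {
      double cpus;
      double mem;  // MB
    };

    // Delta-based resize: replaying the same request changes the result.
    Resources applyDelta(Resources current, Resources delta) {
      return {current.cpus + delta.cpus, current.mem + delta.mem};
    }

    // Absolute resize: replaying the same request is a no-op. This matches
    // the observation that containerizer->update() applies the aggregate
    // size for the container.
    Resources applyAbsolute(Resources /*current*/, Resources desired) {
      return desired;
    }

    int main() {
      Resources task{2.0, 1024.0};

      // A retried delta lands twice: 4 cpus instead of the intended 3.
      Resources delta{1.0, 512.0};
      std::cout << applyDelta(applyDelta(task, delta), delta).cpus << "\n";

      // A retried absolute request converges to the same state: 3 cpus.
      Resources desired{3.0, 1536.0};
      std::cout << applyAbsolute(applyAbsolute(task, desired), desired).cpus
                << "\n";
    }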

> > And I have 2 more questions that I want to discuss with you:
> > 1. David G raised a user story that a framework should be able to
> > resize its executor. I think this is a valid use case, but I would
> > suggest we focus on task resizing in the MVP and handle executor
> > resizing post-MVP. What do you think?
> > 2. Do you think we need to involve the executor in task resizing?
> > E.g., have the slave send a message (e.g., RunTaskMessage) to the
> > executor so that the executor can do the actual resizing? The reason I
> > raise this question is that in some cases the executor needs to be
> > aware of the resized resources. For example, if the framework adds a
> > new port to a task, the executor and the task should know about the
> > new port so that the task can start to use it. And in the Kubernetes
> > on Mesos case, a user may want to resize a pod which is actually
> > created and managed by the k8sm-executor, so the executor should be
> > involved in resizing the resources of the pod.
> >
>
> Maybe we can do that down the line; as an MVP, maybe we can skip it but
> have a model that supports it?
> Using the task info as a 'desired state', changing the executor info
> resources could be used to change its size. However, there are some
> details around master failover and slave reregistration, where executor
> infos are sent from the slaves, that we need to be careful about.
>

So you mean that for executor resizing, we do not need to implement it in
the MVP, but we do need to cover it in the design doc so that we know how
we are going to implement it post-MVP, right? I am not sure what you mean
by "using the task info as a 'desired state'"; I think we will not
leverage or change TaskInfo in this project, so could you please
elaborate?

And do you have any comments on my second question above? Do you think we
need to involve the executor in task resizing?
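
To make that question concrete, here is a minimal standalone sketch (all
names are invented for illustration; this is not actual Mesos executor
code) of what executor involvement could look like, assuming the slave
forwards the absolute post-resize resources and the executor reacts,
e.g. to a newly added port:

    #include <iostream>
    #include <map>
    #include <set>
    #include <string>

    // Toy resource vector, just for illustration.
    struct Resources {
      double cpus;
      double mem;  // MB
      std::set<int> ports;
    };

    class Executor {
     public:
      void launchTask(const std::string& taskId, const Resources& r) {
        tasks_[taskId] = r;
      }

      // Receives the absolute post-resize resources (not a delta), so a
      // redelivered message converges to the same state.
      void handleResize(const std::string& taskId, const Resources& desired) {
        auto it = tasks_.find(taskId);
        if (it == tasks_.end()) {
          return;  // Task already terminated; nothing to resize.
        }

        // React to newly granted ports so the task can start using them;
        // a k8sm-style executor would resize its pod here instead.
        for (int port : desired.ports) {
          if (it->second.ports.count(port) == 0) {
            std::cout << "task " << taskId << ": new port " << port << "\n";
          }
        }

        it->second = desired;  // Adopt the new aggregate size.
      }

     private:
      std::map<std::string, Resources> tasks_;
    };

    int main() {
      Executor executor;
      executor.launchTask("task-1", {2.0, 1024.0, {31000}});
      executor.handleResize("task-1", {2.0, 1024.0, {31000, 31001}});
    }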

> > Currently I do not have a PoC implementation for my proposal yet. Do
> > you recommend that we have one now? Or after the design is close to
> > being finalized, or at least after we make the decision among those 3
> > options about scheduler API changes in the design doc?
> >
>
> It doesn't hurt to experiment and see if there are obvious things that
> we have missed.
> If you haven't done any work yet, I'd maybe defer until we at least have
> the placement of the 'resize operation' nailed down.
>

OK, so you would prefer that we start the PoC implementation after we
finalize the design of the resize operation in the scheduler API, right?
That should be clear now, since we all agree option 2 is the best.


> > I'd like to have an online sync-up with you. Can you please let me
> > know when you are usually online on IRC? Or do you prefer other ways
> > to sync up? I will try to catch you :-)
> >
>
> Let's do a joint call; how about Friday or Monday?
> I am available during business hours PST.


Sure, what about 4:00 PM PST this Friday? And do you prefer IRC, a Skype
call, or some other way? :-)


Regards,
Qian
