mesos-user mailing list archives

From "Petr Novak" <oss.mli...@gmail.com>
Subject RE: Can I consider other framework tasks as a resource? Does it make sense?
Date Wed, 14 Dec 2016 18:33:27 GMT
Thanks a lot for the input.

 

“The Y scheduler can accept a rule for how to check readiness on startup.”

 

Based on that, it seems like a +1 that I can consider this a responsibility of the scheduler.

 

Cheers,

Petr

 

 

From: Alex Rukletsov [mailto:alex@mesosphere.com] 
Sent: 14 December 2016 13:01
To: user
Subject: Re: Can I consider other framework tasks as a resource? Does it make sense?

 

“Task dependency” is probably too vague a notion to discuss specifically. Mesos currently does not
explicitly support arbitrary task dependencies. You mentioned colocation, which is one type of
dependency, so let's look at that.

 

If I understood you correctly, you would like to colocate a task from framework B on the same
node where a task from framework A is running. The first problem is to get a list of such
nodes (and keep it updated, because tasks may crash, migrate, and so on). This can be done,
say, by using Mesos-DNS or the like. The second problem is to ensure that the framework gets
enough resources on those nodes. A possible solution here is to put both frameworks A and B
into the same role and use dynamic reservations to ensure enough resources are set aside for
both tasks. Disadvantages: you have to know about all dependencies upfront, and the frameworks
must be in the same role.
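
To make the two steps concrete, here is a rough sketch in Python; the hostnames, ports, task
name, role, and principal are placeholders for illustration, not values from this thread. It
resolves the task's node via the Mesos-DNS REST API and then reserves resources on that agent
through the master's /reserve operator endpoint:

    import json
    import requests

    MESOS_DNS = "http://mesos-dns.example.com:8123"  # assumed Mesos-DNS REST endpoint
    MASTER = "http://mesos-master.example.com:5050"  # assumed master address

    # Step 1: find where framework A's task runs. Mesos-DNS names tasks
    # <task>.<framework>.mesos; re-resolve periodically, since the task
    # may be restarted on another node.
    nodes = requests.get(MESOS_DNS + "/v1/hosts/mytask.framework-a.mesos").json()
    task_ip = nodes[0]["ip"]

    # Map the IP to an agent ID; an agent's "pid" embeds its IP,
    # e.g. "slave(1)@10.0.0.7:5051".
    slaves = requests.get(MASTER + "/master/slaves").json()["slaves"]
    agent_id = next(a["id"] for a in slaves if task_ip in a["pid"])

    # Step 2: dynamically reserve resources on that agent for the role
    # both frameworks share, via the /master/reserve operator endpoint.
    resources = [{
        "name": "cpus",
        "type": "SCALAR",
        "scalar": {"value": 1.0},
        "role": "shared-role",                        # both A and B register in this role
        "reservation": {"principal": "my-principal"}  # assumed reservation principal
    }]
    requests.post(MASTER + "/master/reserve",
                  data={"slaveId": agent_id,
                        "resources": json.dumps(resources)}).raise_for_status()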

 

Now the question is, why would you need to colocate workloads in the first place? I would say
this is something you should avoid if possible, like any extra constraint that complicates the
system. Probably the only 100% legitimate use case for colocation is data locality, and solving
that particular problem seems easier than addressing arbitrary task dependencies.

 

If all you are trying to achieve is making sure that a specific service, represented by a
framework X, is running and ready in the cluster, you can do that by running specific checks
before starting a dependent framework Y or before launching a new task in that framework. If
your question is whether Y should know about X and how to check X's readiness in the cluster,
I'd say you'd better keep that abstracted: the Y scheduler can accept a rule for how to check
readiness on startup.
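
As a minimal sketch of what such a rule could look like (the flag name and URL below are
hypothetical, not an existing Mesos feature): Y's scheduler is handed a readiness probe, say an
HTTP URL, and polls it before launching anything, without knowing what X actually is:

    import time
    import requests

    def wait_until_ready(url, timeout=300, interval=5):
        """Poll a readiness endpoint until it answers 200 OK or we time out."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                if requests.get(url, timeout=2).status_code == 200:
                    return True
            except requests.RequestException:
                pass  # dependency not reachable yet; keep polling
            time.sleep(interval)
        return False

    # The rule is injected as configuration (e.g. a --readiness-url flag),
    # so Y only knows "poll this URL", not "this is framework X".
    if not wait_until_ready("http://service-x.example.com/health"):
        raise SystemExit("dependency not ready; not launching tasks")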

 

On Wed, Dec 14, 2016 at 5:14 AM, haosdent <haosdent@gmail.com> wrote:

Hi, @Petr.

 

> For example, if I want to run my task colocated with some other tasks on the same node, I
have to make this decision somewhere.

Do you mean "POD" here?

 

In my case, if there are dependencies between my tasks, I use a database, a message queue, or
ZooKeeper to implement the coordination.
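
For example, with ZooKeeper the upstream task can publish a readiness marker that the dependent
task waits on. A minimal sketch using the kazoo client follows; the hosts and znode paths are
made up for illustration, and the two halves would run in different processes:

    import threading
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="zk1.example.com:2181")  # assumed ZooKeeper ensemble
    zk.start()

    READY_PATH = "/deps/upstream-task/ready"  # hypothetical marker znode

    # Upstream task: announce readiness with an ephemeral znode, so the
    # marker disappears automatically if the task dies.
    zk.ensure_path("/deps/upstream-task")
    zk.create(READY_PATH, b"ok", ephemeral=True)

    # Dependent task: block until the marker exists before doing work.
    ready = threading.Event()

    @zk.DataWatch(READY_PATH)
    def on_change(data, stat):
        if stat is not None:
            ready.set()

    ready.wait()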

 

On Wed, Dec 14, 2016 at 3:09 AM, Petr Novak <oss.mlists@gmail.com> wrote:

Hello,

I want to execute tasks which require some other tasks from other framework(s) to already be
running. I'm wondering where such logic/strategy/policy belongs in principle. I understand
scheduling as a process that decides where to execute a task according to the availability of
resources, typically CPU, memory, network, disk, etc.

 

If my task requires other tasks to be running, could I generalize and consider those tasks from
other frameworks a kind of required resource, and put this logic/strategy decision into the
scheduler? For example, if I want to run my task colocated with some other tasks on the same
node, I have to make this decision somewhere.

 

Does it make any sense? I'm asking because I have never thought about other frameworks/tasks as
“resources” that I could feed into a scheduler, to square this with my understanding of a
scheduler. Or does it rather belong higher up, in a framework, or lower down, in an executor?
Should a scheduler be dedicated to decisions about the resources that are offered, and am I
mixing concepts?

 

Or should I just keep the distinction between resources and requirements/policies? Either way,
does this kind of logic still belong in the scheduler, or should it live somewhere else? I'm
trying to understand which logic should be in the scheduler and what should go elsewhere.

 

Many thanks, 

Petr

 
 

-- 

Best Regards,

Haosdent Huang

 

