aurora-dev mailing list archives

From Bill Farner <wfar...@apache.org>
Subject Re: Aurora roles vs Slave roles and how that relates to slave configuration --resource
Date Wed, 24 Sep 2014 16:03:57 GMT
The role declared on the slave --resources argument is associated with the
role declared in FrameworkInfo [1].  For the resources you have, only a
framework with the role 'kafka' will receive the resources tagged with
'kafka'.  From mesos.proto:

> The role field is used to group frameworks for allocation decisions,
> depending on the allocation policy being used.
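To make the matching concrete, here is a small Python sketch (my own illustration, not the Mesos allocator) that parses a --resources string of the form shown in this thread and reports which entries a framework with a given FrameworkInfo role would be eligible to receive; entries tagged '*' are unreserved and available to any framework:

```python
def parse_resources(spec):
    """Parse a 'name(role):value; ...' string into {(name, role): value}.

    A resource with no '(role)' suffix is treated as unreserved ('*').
    """
    out = {}
    for item in spec.split(";"):
        item = item.strip()
        if not item:
            continue
        head, value = item.split(":", 1)
        name, _, role = head.partition("(")
        role = role.rstrip(")") if role else "*"
        out[(name, role)] = value.strip()
    return out


def offerable(resources, framework_role):
    """Entries a framework with the given role could be offered:
    resources tagged with its own role, plus unreserved ('*') ones."""
    return {k: v for k, v in resources.items()
            if k[1] in (framework_role, "*")}


res = parse_resources(
    "cpus(kafka):16; cpus(*):16; mem(kafka):48000; mem(*):248000")
print(offerable(res, "kafka"))  # both the 'kafka' and the '*' entries
print(offerable(res, "other"))  # only the '*' entries
```

A framework whose FrameworkInfo does not declare the 'kafka' role never sees the 'kafka'-reserved resources, which is exactly why the reservation works as a soft guarantee for the broker.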


While the overloaded term is unfortunate, the 'role' defined by Aurora is the
user that owns jobs, and also the unix account we run user processes as.

Aurora has support for 'dedicated hosts' (currently undocumented) to
support a use case like yours.  This behaves somewhat as you describe, with
the exception that we will not currently allow non-dedicated tasks to land
on dedicated hosts (though this would be a relatively small feature to add).
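Since dedicated hosts are undocumented, here is a rough sketch of the shape this takes; treat the attribute and constraint names below as illustrative assumptions rather than a stable interface:

```python
# Illustrative only -- the dedicated-host feature is undocumented, so the
# exact names here are assumptions, not a stable interface.
#
# 1. Start the slave with a 'dedicated' attribute of the form <role>/<group>:
#      mesos-slave --attributes="dedicated:kafka/brokers" ...
#
# 2. In the .aurora config, pin the job to those hosts with a matching
#    'dedicated' constraint:
jobs = [
    Job(
        cluster='cluster1',
        role='kafka',
        environment='prod',
        name='broker',
        constraints={'dedicated': 'kafka/brokers'},
        task=broker_task,  # hypothetical Task defined elsewhere in the config
    )
]
```

Only jobs carrying the matching constraint schedule onto those hosts, which is the inverse of the resource-role mechanism: it fences the hosts off rather than reserving a slice of them.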

You may also be interested in support for 'persistent tasks' in mesos,
which should more formally address this use case.  The design proposal for
this is likely to hit the mesos-dev list over the next few weeks.

[1]
https://github.com/apache/mesos/blob/1453a477511c8f6f22ff16e3dd13d0532e019c5b/include/mesos/mesos.proto#L97-L128


-=Bill

On Tue, Sep 23, 2014 at 7:16 PM, Joe Stein <joe.stein@stealth.ly> wrote:

> Hey Brian,
>
> Let's say I have X machines that have been set aside because they have a
> "characteristic" suited to running Apache Kafka.
>
> The hardware can do a lot more than I need/want for Kafka, but it is within
> the "pool" of elasticity (so not every server, but a good chunk of them,
> including non-production ones).
>
> So, I set one of these slaves like this.
>
> --resources="cpus(kafka):16; cpus(*):16; mem(kafka):48000; mem(*):248000;
> disk(kafka):7105340; disk(*):1105340"
>
> This gets around the problem of a Kafka broker going down: we know we have
> the resources to restore the broker to that machine with the same broker.id
> and read the data that was there (like after the server crashes and comes
> back up).  We could make sure that we keep running brokers on the servers
> deemed for Kafka while also allowing other stuff to run on them.
>
>
> /*******************************************
>  Joe Stein
>  Founder, Principal Consultant
>  Big Data Open Source Security LLC
>  http://www.stealth.ly
>  Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
> ********************************************/
>
> On Tue, Sep 23, 2014 at 8:31 PM, Brian Wickman <wickman@twopensource.com>
> wrote:
>
> > They are unrelated.  That is not to say they shouldn't be supported.  Any
> > particular use-cases that come to mind?
> >
> > On Tue, Sep 23, 2014 at 3:44 PM, Joe Stein <joe.stein@stealth.ly> wrote:
> >
> > > Hi, I was looking at resource role allocation (slave configuration
> > > --resources) and was wondering how this relates to Aurora, and whether
> > > the role there on the slave is the same as Aurora's role in the job key?
> > >
> > > e.g.
> > >
> > > --resources="cpus(prod):8; cpus(stage):2; mem(*):15360; disk(*):710534;
> > > ports(*):[31000-32000]"
> > >
> > > and
> > >
> > > cluster1/tyg/prod/workhorse
> > >
> > > I ask because the Aurora role is a user, but the slave role seems to be
> > > more like what the environment namespace component in Aurora is for?
> > > Or something else?
> > >
> > > Thanks!
> > >
> > > /*******************************************
> > >  Joe Stein
> > >  Founder, Principal Consultant
> > >  Big Data Open Source Security LLC
> > >  http://www.stealth.ly
> > >  Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
> > > ********************************************/
> > >
> >
>
