mesos-dev mailing list archives

From Joe Stein <>
Subject Re: HDFS with Mesos Slave for Executor
Date Mon, 17 Feb 2014 20:04:46 GMT
Hi Niklas, I am trying to understand the following type of scenario.

For the Executor for a Kafka broker (I am just starting out), I can just
launch the task (I believe; I am progressing now, trying to make that work)
by doing a KafkaServer.start() from within the Executor.

However, for Consumers (and possibly to the same extent for Producers, though
there are more nuances there) I do not think the Framework will have the
luxury of building that in.

So my question is twofold, really:

1) How does the Executor, compiled within the framework/scheduler jar
running (let's say on the master), get to all of the slave nodes?
2) For jar files that just need to be executed (it looks like this is going
to be best done by issuing a command, but if I can do something else, that
might be good), where/how does the slave know where to launch them from?

I am missing the part where Mesos gets what the slave needs in order to
execute the code... Having gone through Aurora, Marathon, MesosHadoop,
StormHadoop, and the different Mesosphere Scala examples, it is not clear how
this is going to work (for what I am doing, MesosHadoop is the closest to
what I am trying to do with Apache Kafka).
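For what it's worth, the distribution mechanism being asked about can be modeled roughly like this: the framework never copies jars to slaves itself; it lists them as URIs on the executor's command, and each slave fetches those URIs into the task's sandbox before running the command. Below is a minimal illustrative model in plain Python, not the actual Mesos API; the names URI, ExecutorSpec, and sandbox_files are hypothetical stand-ins for ExecutorInfo/CommandInfo and the slave-side fetcher:

```python
# Illustrative model (NOT the Mesos API) of how an executor's artifacts
# reach a slave: the scheduler declares URIs, and the slave-side fetcher
# resolves each one into the task sandbox before launching the command.

from dataclasses import dataclass, field

@dataclass
class URI:
    value: str            # e.g. "hdfs:///frameworks/kafka/kafka-executor.tgz"
    extract: bool = True  # unpack .tgz/.zip archives after download

@dataclass
class ExecutorSpec:       # stand-in for Mesos ExecutorInfo + CommandInfo
    command: str          # run relative to the sandbox, e.g. "./bin/start.sh"
    uris: list = field(default_factory=list)

def sandbox_files(spec: ExecutorSpec) -> list:
    """What the slave's fetcher would place in the sandbox (model only)."""
    files = []
    for uri in spec.uris:
        name = uri.value.rsplit("/", 1)[-1]
        files.append(name)  # the downloaded artifact itself
        if uri.extract and name.endswith((".tgz", ".tar.gz", ".zip")):
            files.append(name + " (extracted)")
    return files

spec = ExecutorSpec(
    command="./kafka-executor/bin/start.sh",
    uris=[URI("hdfs:///frameworks/kafka/kafka-executor.tgz")],
)
print(sandbox_files(spec))
# → ['kafka-executor.tgz', 'kafka-executor.tgz (extracted)']
```

So the answer to question 1 would be: the jar does not have to pre-exist on the slaves; it just has to be reachable at a URI (HDFS, HTTP, or a path mounted on every slave).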

If it all just works magically under the hood for me, that is great, but I
want to peek behind and understand the magic :) Or maybe there is something I
have to do (like, do I have to install HDFS on every slave and then do
something with the paths, or something?).

It is just not clear, and it is something I am trying to figure out but keep
running into a wall on... I wish I had more time to work on this, but in the
time I do get to work on it I keep stumbling into these types of things, so
any help is VERY much appreciated. Thanks!!!!

 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 Twitter: @allthingshadoop <>

On Mon, Feb 17, 2014 at 12:27 PM, Niklas Quarfot Nielsen <> wrote:

> Hey Joe,
> A lot of this code was rewritten in connection with the new containerizer
> API.
> But, try to take a look at src/launcher/fetcher.cpp; this is where the
> hdfs URIs are being fetched and extracted.
> If you are looking for the point where the executor is launched,
> Framework::launchExecutor() in slave.cpp (along with
> Containerizer::launch()) might be another place to look.
> Is there a particular problem you are running into?
> I think Ian can elaborate too.
> Cheers,
> Niklas
> On February 16, 2014 at 9:23:47 AM, Joe Stein (<>)
> wrote:
> Hi, I am struggling to see the entry point where the slave will be able to
> launch an executor (and whatever other files/configs are needed for what it
> is executing) if every slave has HDFS running on it, or is there a
> different way to do this?
> /*******************************************
> Joe Stein
> Founder, Principal Consultant
> Big Data Open Source Security LLC
> Twitter: @allthingshadoop <>
> ********************************************/
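To make the fetcher.cpp pointer above more concrete: the fetcher makes a per-URI decision about how to retrieve the artifact and whether to unpack it. The sketch below is a simplified model in Python, not the real C++ code, and the exact scheme lists are assumptions based on the fetcher of that era:

```python
# Rough model of the per-URI decisions made by the slave-side fetcher
# (mirrors the flow of src/launcher/fetcher.cpp; simplified, not the real code).

def fetch_plan(uri: str) -> str:
    """Which mechanism the fetcher would use for a given URI (model only)."""
    if uri.startswith(("hdfs://", "hftp://", "s3://", "s3n://")):
        return "hadoop client copy"        # needs a Hadoop client on the slave
    if uri.startswith(("http://", "https://", "ftp://")):
        return "network download"
    return "local filesystem copy"

def should_extract(uri: str) -> bool:
    """Common archive formats get unpacked into the sandbox."""
    return uri.endswith((".tgz", ".tar.gz", ".tar.bz2", ".zip"))

print(fetch_plan("hdfs:///frameworks/kafka/kafka-executor.tgz"))
# → hadoop client copy
```

This is also why HDFS URIs only work on slaves that have a Hadoop client installed and on the path; an HTTP URI avoids that requirement entirely.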
