mesos-user mailing list archives

From Pradeep Chhetri <pradeep.chhetr...@gmail.com>
Subject Re: Apache Spark Over Mesos
Date Tue, 15 Mar 2016 16:33:50 GMT
Just to be clear:

I am already setting spark.mesos.executor.docker.image to a Docker image, and
I am starting the Spark Dispatcher over Marathon using that image. When I
submit jobs with spark-submit, Mesos starts the driver using the same Docker
image, but the tasks start as plain Java processes.

$ cat conf/spark-defaults.conf

spark.mesos.coarse: false
spark.mesos.executor.docker.image: docker-registry/mesos-spark:master-12
spark.mesos.mesosExecutor.cores: 0.25
spark.mesos.executor.home: /opt/spark
spark.mesos.uris: file:///etc/docker.tar.gz
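
For reference, the submission looks roughly like this (the dispatcher host, port, and application jar below are placeholders, not my exact values):

```shell
# Submit in cluster mode so the dispatcher launches the driver on Mesos.
# The docker image setting comes from conf/spark-defaults.conf shown above;
# it can equally be passed inline with --conf.
spark-submit \
  --master mesos://spark-dispatcher.example.com:7077 \
  --deploy-mode cluster \
  --conf spark.mesos.executor.docker.image=docker-registry/mesos-spark:master-12 \
  --class com.example.MyJob \
  /path/to/my-job.jar
```

The driver container comes up fine with this; it is only the executor tasks it launches that fall back to plain Java processes.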

Thanks

On Tue, Mar 15, 2016 at 4:24 PM, Rad Gruchalski <radek@gruchalski.com>
wrote:

> As Tim suggested: spark.mesos.executor.docker.image is your friend.
>
> Kind regards,
> Radek Gruchalski
> radek@gruchalski.com <radek@gruchalski.com>
> de.linkedin.com/in/radgruchalski/
>
>
> *Confidentiality:*This communication is intended for the above-named
> person and may be confidential and/or legally privileged.
> If it has come to you in error you must take no action based on it, nor
> must you copy or show it to anyone; please delete/destroy and inform the
> sender immediately.
>
> On Tuesday, 15 March 2016 at 17:23, Pradeep Chhetri wrote:
>
> Hello Radoslaw,
>
> Thank you for the quick reply. A few questions:
>
> 1) Do you mean mounting the Spark artifacts as a volume on each Mesos agent
> node? That would mean number of volumes = number of Mesos agents.
>
> 2) Since I am not using HDFS at all, that is definitely not an option for
> me.
>
> Isn't there a way to also launch the Spark tasks as Docker containers
> that are self-contained with the Spark artifacts?
>
> Thanks.
>
> On Tue, Mar 15, 2016 at 3:49 PM, Radoslaw Gruchalski <radek@gruchalski.com
> > wrote:
>
> Pradeep,
>
> You can mount a Spark directory as a volume. This means you have to have
> Spark deployed on every agent.
>
> Another thing you can do is place Spark in HDFS, assuming you have HDFS
> available, but that too will download a copy to the sandbox.
>
> I'd prefer the former.
>
> Sent from Outlook Mobile <https://aka.ms/qtex0l>
>
> _____________________________
> From: Pradeep Chhetri <pradeep.chhetri89@gmail.com>
> Sent: Tuesday, March 15, 2016 4:41 pm
> Subject: Apache Spark Over Mesos
> To: <user@mesos.apache.org>
>
>
>
> Hello,
>
> I am able to run Apache Spark over Mesos. It's quite simple to run the Spark
> Dispatcher over Marathon and ask it to run the Spark executor (I guess it can
> also be called the Spark driver) as a Docker container.
>
> I have a query regarding this:
>
> All Spark tasks are spawned directly by first downloading the Spark
> artifacts. I was wondering if there is some way I can start them as
> Docker containers too. This would save the time spent downloading the
> Spark artifacts. I am running Spark in fine-grained mode.
>
> I have attached a screenshot of a sample job
>
>
> Thanks,
>
> --
> Pradeep Chhetri
>
>
>
>
>
> --
> Pradeep Chhetri
>
>
>


-- 
Pradeep Chhetri
