spark-user mailing list archives

From Mich Talebzadeh <>
Subject Re: Executors and Cores
Date Mon, 16 May 2016 06:45:45 GMT
Hi Pradeep,

Resources allocated to each Spark app can be capped to allow balanced
resourcing across all apps. However, you really need to monitor each app.

One option would be to use the jmonitor package to look at resource usage
(heap, CPU, memory etc.) for each job.

In general you should not allocate too much for each job; FIFO is the
default scheduling mode.
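If jobs within one application need to share resources more evenly than
FIFO allows, Spark's documented spark.scheduler.mode property switches to
fair scheduling. A minimal sketch (the class name and jar are placeholders,
not from any real app):

```shell
# Sketch: enable fair scheduling within an application (default is FIFO).
# spark.scheduler.mode is a standard Spark configuration property;
# com.example.MyApp and myapp.jar are hypothetical placeholders.
${SPARK_HOME}/bin/spark-submit \
                --conf spark.scheduler.mode=FAIR \
                --class com.example.MyApp \
                myapp.jar
```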

If you are allocating resources, then you need to cap them, e.g.:

${SPARK_HOME}/bin/spark-submit \
                --master local[2] \
                --driver-memory 4g \
                --num-executors 1 \
                --executor-memory 4G \
                --executor-cores 2

Don't over-allocate resources, as they will be wasted.
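As a rough, hedged illustration of sizing from the input rather than from
the cluster maximum (the 128 MB partition size and two cores per executor
are assumptions for the sketch, not a rule):

```python
import math

def suggest_executors(input_bytes, partition_bytes=128 * 1024 * 1024,
                      cores_per_executor=2):
    """Rough sketch: roughly one task per ~128 MB partition,
    one concurrent task per executor core."""
    partitions = max(1, math.ceil(input_bytes / partition_bytes))
    executors = max(1, math.ceil(partitions / cores_per_executor))
    return partitions, executors

# 1 GB of input -> 8 partitions of ~128 MB -> 4 two-core executors
print(suggest_executors(1 * 1024**3))
```

By this kind of estimate a 1 GB file never justifies grabbing the whole
cluster, which is the point about not over-allocating.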

The Spark GUI on port 4040 can be useful, but it only displays the
application that bound to that port, so you won't see other jobs there
until the JVM using port 4040 completes (each subsequent application binds
to the next free port: 4041, 4042, and so on).

Start by identifying Spark jobs through jps; they will show up as
SparkSubmit.
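For example, filtering for the driver process. The jps output here is
simulated for illustration (real PIDs and process mix will differ):

```shell
# jps prints "<pid> <main class>" per JVM; Spark drivers appear as SparkSubmit.
# Simulated jps output piped through grep, in place of a live `jps | grep SparkSubmit`:
printf '12345 SparkSubmit\n23456 Jps\n34567 CoarseGrainedExecutorBackend\n' \
  | grep SparkSubmit
```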


Dr Mich Talebzadeh


On 15 May 2016 at 13:19, <> wrote:

> Hi,
> I have seen multiple videos on Spark tuning which show how to determine #
> cores, # executors and memory size of the job.
> In all that I have seen, it seems each job has to be given the max
> resources allowed in the cluster.
> How do we factor in input size as well? If I am processing a 1 GB compressed
> file, then I can live with, say, 10 executors and not 21, etc.
> Also, do we consider other jobs in the cluster that could be running? I
> will use only 20 GB out of the available 300 GB, etc.
> Thanks,
> Pradeep
