spark-user mailing list archives

From Charles Feduke <>
Subject Re: spark on ec2
Date Fri, 06 Feb 2015 04:13:54 GMT
I don't see anything that says you must explicitly restart them to load the
new settings, but most daemons need either a trapped signal or a brute-force
full restart before they reload their configuration. I'd take a guess and
use the $SPARK_HOME/sbin/{stop,start} scripts on your master node and see.
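For example, the whole cycle might look like the following sketch. The copy-dir helper is the spark-ec2 script mentioned below, and stop-all.sh / start-all.sh are the standalone cluster scripts shipped in Spark's sbin directory; the /root/spark path assumes spark-ec2's default install location.

```shell
# Push the updated conf directory out to the slaves (spark-ec2 helper)
~/spark-ec2/copy-dir /root/spark/conf

# From the master, stop and restart the standalone master and all workers
# so the daemons re-read spark-env.sh on startup
/root/spark/sbin/stop-all.sh
/root/spark/sbin/start-all.sh
```

These commands only make sense against a live cluster, so treat them as a sketch rather than something to run blindly.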

I just tested this out on my integration EC2 cluster and got odd results
when stopping the workers ("no workers found"), but the start script seemed
to work. The cluster was running and functioning after executing both
scripts, though I hadn't made any changes to spark-env either.

On Thu Feb 05 2015 at 9:49:49 PM Kane Kim <> wrote:

> Hi,
> I'm trying to change a setting as described here:
> Then I ran ~/spark-ec2/copy-dir /root/spark/conf to distribute it to the
> slaves, but without any effect. Do I have to restart the workers?
> How to do that with spark-ec2?
> Thanks.
