spark-dev mailing list archives

From Michel Dufresne <sparkhealthanalyt...@gmail.com>
Subject Setting JVM options to Spark executors in Standalone mode
Date Fri, 16 Jan 2015 17:56:08 GMT
Hi All,

I'm trying to set some JVM options to the executor processes in a
standalone cluster. Here's what I have in *spark-env.sh*:

jmx_opt="-Dcom.sun.management.jmxremote"
jmx_opt="${jmx_opt} -Djava.net.preferIPv4Stack=true"
jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.port=9999"
jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.rmi.port=9998"
jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.ssl=false"
jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.authenticate=false"
jmx_opt="${jmx_opt} -Djava.rmi.server.hostname=${SPARK_PUBLIC_DNS}"
export SPARK_WORKER_OPTS="${jmx_opt}"


However, the options are showing up on the *daemon* JVM, not on the *worker*
(executor) JVMs. The effect is the same as if I had used
SPARK_DAEMON_JAVA_OPTS, which is documented to set options on the daemon
process.
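In case it helps, the per-application alternative I'm aware of is passing the
flags through spark.executor.extraJavaOptions at submit time. A minimal sketch
(the master URL and application jar below are placeholders, not my actual
values):

```shell
# Assemble the same JMX option string as in spark-env.sh, but hand it to
# the executors via spark.executor.extraJavaOptions instead of
# SPARK_WORKER_OPTS.
jmx_opt="-Dcom.sun.management.jmxremote"
jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.port=9999"
jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.ssl=false"
jmx_opt="${jmx_opt} -Dcom.sun.management.jmxremote.authenticate=false"

# Placeholder master URL and jar -- substitute the real ones.
spark-submit \
  --master spark://master-host:7077 \
  --conf "spark.executor.extraJavaOptions=${jmx_opt}" \
  my-app.jar
```

Note that with a fixed jmxremote.port this only works for one executor per
host, since two executor JVMs on the same machine would collide on the port.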

Thanks in advance for your help,

Michel
