spark-user mailing list archives

From Sophia <sln-1...@163.com>
Subject Re: How to run shark?
Date Wed, 14 May 2014 07:19:01 GMT
My configuration is shown below. The slave nodes have already been configured, but I do not know what is wrong with Shark. Can you help me?
shark-env.sh
export SPARK_USER_HOME=/root
export SPARK_MEM=2g
export SCALA_HOME="/root/scala-2.11.0-RC4"
export SHARK_MASTER_MEM=1g
export HIVE_CONF_DIR="/usr/lib/hive/conf"
export HIVE_HOME="/usr/lib/hive"
export HADOOP_HOME="/usr/lib/hadoop"
export SPARK_HOME="/root/spark-0.9.1"
export MASTER="spark://192.168.10.220:7077"
export SHARK_EXEC_MODE=yarn

SPARK_JAVA_OPTS=" -Dspark.local.dir=/tmp "
SPARK_JAVA_OPTS+="-Dspark.kryoserializer.buffer.mb=10 "
SPARK_JAVA_OPTS+="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps "
export SPARK_JAVA_OPTS
export SPARK_ASSEMBLY_JAR="/root/spark-0.9.1/assembly/target/scala-2.10/spark-assembly_2.10-0.9.1-hadoop2.2.0.jar"
export SHARK_ASSEMBLY_JAR="/root/shark-0.9.1-bin-hadoop2/target/scala-2.10/shark_2.10-0.9.1.jar"
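One quick way to rule out path typos in the settings above is to check that every directory and jar named in shark-env.sh actually exists before launching bin/shark. This is only an illustrative sketch (the check_dir/check_file helpers are hypothetical, not part of Shark), run after sourcing shark-env.sh:

```shell
#!/bin/sh
# Sanity-check helpers: report whether each configured path exists.
check_dir() {
  if [ -d "$1" ]; then
    echo "ok: $1"
  else
    echo "MISSING: $1"
  fi
}

check_file() {
  if [ -f "$1" ]; then
    echo "ok: $1"
  else
    echo "MISSING: $1"
  fi
}

# Directories taken from shark-env.sh (adjust to your layout):
for d in "$SCALA_HOME" "$HIVE_HOME" "$HIVE_CONF_DIR" "$HADOOP_HOME" "$SPARK_HOME"; do
  check_dir "$d"
done

# Assembly jars must also be present:
check_file "$SPARK_ASSEMBLY_JAR"
check_file "$SHARK_ASSEMBLY_JAR"
```

Any "MISSING" line points at a variable that needs correcting before Shark can start.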

Best regards,



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/How-to-run-shark-tp5581p5688.html
