spark-user mailing list archives

From pcsenthil <pcsent...@gmail.com>
Subject Spark Java Configuration.
Date Tue, 02 Sep 2014 14:02:48 GMT
Team,

I am new to Apache Spark and don't have much background in Hadoop or big
data. I need clarification on the following.

How does Spark configuration work? From a tutorial I got the below:

SparkConf conf = new SparkConf().setAppName("Simple application")
                                .setMaster("local[4]");
JavaSparkContext java_SC = new JavaSparkContext(conf);

From this, I understand that we provide the configuration to Spark
through the Java program.
Let us assume I have written this in a separate Java method.

My questions are:

What happens if I keep calling this method?
If each call keeps creating a new Spark object, how do we manage JVM
memory, given that each object tries to run 4 concurrent threads?
Is there an option to find an existing one in the JVM, so that instead of
creating a new Spark object I can reuse it?

Please help me on this.
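For what it's worth, one common way to reuse a single heavyweight object per JVM is a lazily initialized holder class. The sketch below uses a plain placeholder class in place of JavaSparkContext so it runs standalone; the class and method names are illustrative, not part of any Spark API, and the same shape would apply to a real context:

```java
// A minimal sketch of a lazily initialized, per-JVM singleton holder.
// "Context" stands in for a heavyweight object such as JavaSparkContext;
// all names here are illustrative, not from the Spark API.
final class Context {
    final String appName;
    Context(String appName) { this.appName = appName; }
}

public final class ContextHolder {
    private static Context context;

    // Create the context on the first call; every later call returns
    // the same instance, so only one is ever built per JVM.
    public static synchronized Context getOrCreate() {
        if (context == null) {
            context = new Context("Simple application");
        }
        return context;
    }

    private ContextHolder() {}

    public static void main(String[] args) {
        Context a = getOrCreate();
        Context b = getOrCreate();
        System.out.println(a == b); // prints true: same instance reused
    }
}
```

With this shape, repeated calls from anywhere in the program hand back the one existing object instead of allocating a new one each time.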





