spark-dev mailing list archives

From Matei Zaharia <matei.zaha...@gmail.com>
Subject Re: fair scheduler
Date Sun, 10 Aug 2014 22:49:16 GMT
Hi Crystal,

The fair scheduler is only for jobs running concurrently within the same SparkContext (i.e.
within an application), not for separate applications on the standalone cluster manager. It
has no effect there. To run more of those concurrently, you need to set a cap on how many
cores they each grab with spark.cores.max.
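
For example, something like this (a minimal sketch against the 0.8-era API, where per-application settings are passed as Java system properties before the SparkContext is created; the core count and app name are placeholders):

    import org.apache.spark.SparkContext

    // Cap this application at 4 cores so other applications can get
    // cores on the standalone cluster at the same time. Must be set
    // before the SparkContext is created.
    System.setProperty("spark.cores.max", "4")

    // FAIR mode only arbitrates between jobs submitted concurrently
    // inside this one SparkContext (e.g. from multiple threads).
    System.setProperty("spark.scheduler.mode", "FAIR")

    val sc = new SparkContext("spark://streaming1:7077", "MyApp")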

Matei

On August 10, 2014 at 12:13:08 PM, 李宜芳 (xuite627@gmail.com) wrote:

Hi  

I am trying to switch from FIFO to FAIR in standalone mode.

my environment:
hadoop 1.2.1
spark 0.8.0 using standalone mode

and I modified the code as follows:

ClusterScheduler.scala ->
System.getProperty("spark.scheduler.mode", "FAIR")

SchedulerBuilder.scala ->
val DEFAULT_SCHEDULING_MODE = SchedulingMode.FAIR

LocalScheduler.scala ->
System.getProperty("spark.scheduler.mode", "FAIR")

spark-env.sh ->
export SPARK_JAVA_OPTS="-Dspark.scheduler.mode=FAIR"

and when launching:
SPARK_JAVA_OPTS="-Dspark.scheduler.mode=FAIR" ./run-example org.apache.spark.examples.SparkPi spark://streaming1:7077


but it doesn't work.
I want to switch from FIFO to FAIR.
How can I do this?

Regards  
Crystal Lee  
