spark-issues mailing list archives

From "Josh Rosen (JIRA)" <j...@apache.org>
Subject [jira] [Created] (SPARK-11943) Rapidly starting and stopping SparkContexts in local-cluster mode may cause JVM to exit
Date Tue, 24 Nov 2015 07:42:10 GMT
Josh Rosen created SPARK-11943:
----------------------------------

             Summary: Rapidly starting and stopping SparkContexts in local-cluster mode may cause JVM to exit
                 Key: SPARK-11943
                 URL: https://issues.apache.org/jira/browse/SPARK-11943
             Project: Spark
          Issue Type: Bug
          Components: Tests
            Reporter: Josh Rosen
            Priority: Minor


While trying to performance-profile the creation of SparkContexts in {{local-cluster}} mode,
I ran into a weird bug which can cause {{System.exit()}} to be called. In {{local-cluster}}
mode, the Master and Worker processes run inside the driver JVM. Both Master and Worker have
error-handling paths which call {{System.exit()}} in certain cases. If you start and stop
SparkContexts very quickly in {{local-cluster}} mode, I think you can hit race conditions
that trigger those {{System.exit()}} error-handling paths.
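
For illustration, here's a minimal sketch of the pattern at issue (not the actual Master/Worker
source; the object and method names are hypothetical). In a real standalone deployment the exit
only kills that daemon's own process, but in {{local-cluster}} mode it takes down the shared
driver JVM:

{code}
// Illustrative only -- not the actual Spark code; names are hypothetical.
object ExitOnDisconnectSketch {
  @volatile private var stopping = false

  // Invoked from an RPC callback when the connection to the peer drops.
  def onDisconnected(): Unit = {
    if (!stopping) {
      // Peer vanished unexpectedly, so treat it as fatal. A rapid
      // SparkContext.stop() can race with this callback: `stopping` may
      // still be false when the disconnect event arrives.
      System.exit(1)
    }
  }

  def stop(): Unit = { stopping = true }
}
{code}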

Here's a test case which consistently reproduces the problem on my laptop:

{code}
class LocalClusterLaunchTimeBenchmark extends SparkFunSuite with LocalSparkContext with Logging {
  test("benchmark local cluster launching and teardown") {
    for (i <- 1 to 100) {
      val startTime = System.currentTimeMillis()
      val conf = new SparkConf()
        .set("spark.ui.enabled", "false")
        .set("spark.testing", "true")
        .set("spark.master.rest.enabled", "false")
      sc = new SparkContext("local-cluster[2,1,1024]", s"test-$i", conf)
      log.info("--STOPPING--")
      sc.stop()
      log.info("--STOPPED--")
      val duration = System.currentTimeMillis() - startTime
      println(s"Duration: $duration ms")
    }
  }
}
{code}
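
(Here {{local-cluster[2,1,1024]}} asks for two in-process workers, each with one core and 1024 MB,
so every iteration spins up and tears down an embedded Master plus two Workers.)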

I'm not sure whether this underlying problem has led to flakiness in Jenkins. One test which
would be susceptible to this is DistributedSuite's "local-cluster format" test, which winds
up spinning up a bunch of SparkContexts to validate the parsing of {{local-cluster}} patterns:
https://github.com/apache/spark/blob/4021a28ac30b65cb61cf1e041253847253a2d89f/core/src/test/scala/org/apache/spark/DistributedSuite.scala#L52.
We shouldn't be testing this functionality by creating new contexts, so this test should be
refactored anyway, but the Master/Worker {{System.exit()}} calls would explain any failures
we may have observed in that test.
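
If it's refactored, a parsing-only check along these lines would exercise the {{local-cluster}}
pattern matching without creating any contexts (a sketch only; the regex and helper here are
illustrative, not the actual SparkContext parsing code):

{code}
// Sketch of a parsing-only test; the regex is illustrative, not the one in SparkContext.
object LocalClusterParsingSketch {
  private val LocalCluster = """local-cluster\[\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\]""".r

  /** Returns (numWorkers, coresPerWorker, memoryPerWorkerMB) if the master string matches. */
  def parse(master: String): Option[(Int, Int, Int)] = master match {
    case LocalCluster(n, cores, mem) => Some((n.toInt, cores.toInt, mem.toInt))
    case _ => None
  }

  def main(args: Array[String]): Unit = {
    assert(parse("local-cluster[2,1,1024]").contains((2, 1, 1024)))
    assert(parse("local-cluster[2 , 1 , 1024]").contains((2, 1, 1024)))
    assert(parse("local[2]").isEmpty)
    println("local-cluster parsing checks passed")
  }
}
{code}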


