spark-reviews mailing list archives

From JoshRosen <...@git.apache.org>
Subject [GitHub] spark pull request: [SPARK-7826][CORE] Suppress extra calling getC...
Date Tue, 26 May 2015 18:45:05 GMT
Github user JoshRosen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/6352#discussion_r31065901
  
    --- Diff: core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---
    @@ -342,6 +342,35 @@ class DAGSchedulerSuite
         assert(locs === Seq(Seq("hostA", "hostB"), Seq("hostB", "hostC"), Seq("hostC", "hostD")))
       }
     
    +  /**
    +   * +---+ shuffle +---+    +---+
    +   * | A |<--------| B |<---| C |<--+
    +   * +---+         +---+    +---+   |  +---+
    +   *                                +--| E |
    +   *                        +---+   |  +---+
    +   *                        | D |<--+
    +   *                        +---+
    +   * Here, E has one-to-one dependencies on C and D. C is derived from A by performing a shuffle
    +   * and then a map. If we're trying to determine which ancestor stages need to be computed in
    +   * order to compute E, we need to figure out whether the shuffle A -> B should be performed.
    +   * If the RDD C, which has only one ancestor via a narrow dependency, is cached, then we won't
    +   * need to compute A, even if it has some unavailable output partitions. The same goes for B:
    +   * if B is 100% cached, then we can avoid the shuffle on A.
    +   */
    +  test("SPARK-7826: regression test for getMissingParentStages") {
    +    val rddA = new MyRDD(sc, 1, Nil)
    +    val rddB = new MyRDD(sc, 1, List(new ShuffleDependency(rddA, null)))
    +    val rddC = new MyRDD(sc, 1, List(new OneToOneDependency(rddB))).cache()
    +    val rddD = new MyRDD(sc, 1, Nil)
    +    val rddE = new MyRDD(sc, 1,
    +      List(new OneToOneDependency(rddC), new OneToOneDependency(rddD)))
    +    cacheLocations(rddC.id -> 0) =
    +      Seq(makeBlockManagerId("hostA"), makeBlockManagerId("hostB"))
    +    val jobId = submit(rddE, Array(0))
    +    val finalStage = scheduler.jobIdToActiveJob(jobId).finalStage
    +    assert(scheduler.getMissingParentStages(finalStage).size === 0)
    --- End diff --
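
    For readers skimming the archive, here is a rough sketch of the same graph built with the public RDD API (illustrative only, not code from the PR; the concrete operations and names are assumptions): a shuffle between A and B, a narrow map from B to the cached C, and E depending narrowly on both C and D.

        import org.apache.spark.{SparkConf, SparkContext}

        object Spark7826Sketch {
          def main(args: Array[String]): Unit = {
            val sc = new SparkContext(
              new SparkConf().setAppName("spark-7826-sketch").setMaster("local[2]"))

            // A ->(shuffle) B ->(narrow map) C, with C cached; D and E use only narrow deps.
            val rddA = sc.parallelize(1 to 100, 2).map(i => (i % 10, i))
            val rddB = rddA.reduceByKey(_ + _, 2)       // shuffle dependency on A
            val rddC = rddB.map(identity).cache()       // one-to-one dependency on B, cached
            val rddD = sc.parallelize(1 to 10, 2).map(i => (i, i))
            val rddE = rddC.zipPartitions(rddD) { (c, d) => c ++ d }  // narrow deps on C and D

            rddC.count()                 // materialize the cache for C
            println(rddE.toDebugString)  // lineage; indentation changes at the A -> B shuffle
            sc.stop()
          }
        }

    Once C is materialized in the cache, E's partitions can be computed from C and D without rerunning the A -> B shuffle, which appears to be the situation the test's getMissingParentStages assertion is checking.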
    
    Since there's a one-to-one dependency from D to E, won't D and E be computed in the same stage?
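
    (Continuing the illustrative sketch above, one hedged way to see that pipelining outside the suite: every dependency of E is a NarrowDependency, and narrow dependencies do not introduce stage boundaries, so D would be pipelined into the same result stage as E; only the ShuffleDependency between A and B starts a separate stage.)

        import org.apache.spark.{NarrowDependency, ShuffleDependency}

        // rddE's dependencies on rddC and rddD are both narrow, so the DAGScheduler
        // pipelines them into the final result stage; only the ShuffleDependency
        // between rddA and rddB marks a stage boundary.
        rddE.dependencies.foreach {
          case _: ShuffleDependency[_, _, _] => println("shuffle dependency -> separate stage")
          case _: NarrowDependency[_]        => println("narrow dependency -> same stage")
          case other                         => println(s"other dependency: $other")
        }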

