spark-reviews mailing list archives

From vanzin <...@git.apache.org>
Subject [GitHub] spark pull request #19848: [SPARK-22162] Executors and the driver should use...
Date Fri, 01 Dec 2017 00:05:50 GMT
Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19848#discussion_r154235818
  
    --- Diff: core/src/main/scala/org/apache/spark/mapred/SparkHadoopMapRedUtil.scala ---
    @@ -70,7 +70,8 @@ object SparkHadoopMapRedUtil extends Logging {
           if (shouldCoordinateWithDriver) {
             val outputCommitCoordinator = SparkEnv.get.outputCommitCoordinator
             val taskAttemptNumber = TaskContext.get().attemptNumber()
    -        val canCommit = outputCommitCoordinator.canCommit(jobId, splitId, taskAttemptNumber)
    +        val stageId = TaskContext.get().stageId()
    +        val canCommit = outputCommitCoordinator.canCommit(stageId, splitId, taskAttemptNumber)
    --- End diff --
    
    Shouldn't `CommitDeniedException` (below) also be updated to use the stage ID? Otherwise the exception will carry incomplete information.
    
    With that change, `jobId` may become unused in this method.
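
    For illustration, here is a minimal self-contained sketch of the idea (these are not Spark's actual classes or signatures, and the parameter list of `CommitDeniedException` here is an assumption, not copied from the Spark source):

    ```scala
    // Simplified stand-in for Spark's CommitDeniedException, for illustration only.
    final class CommitDeniedException(
        msg: String,
        stageId: Int,
        partitionId: Int,
        attemptNumber: Int)
      extends Exception(msg)

    // Sketch of the commit path: the key point is that the exception is built
    // from the same stageId that the coordinator used for the canCommit decision,
    // so the error report and the decision agree.
    def commitOrFail(canCommit: Boolean, stageId: Int, splitId: Int, attemptNumber: Int): Unit = {
      if (!canCommit) {
        val message =
          s"stage=$stageId, partition=$splitId, attempt=$attemptNumber: commit denied by driver"
        throw new CommitDeniedException(message, stageId, splitId, attemptNumber)
      }
    }
    ```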


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org

