spark-reviews mailing list archives

From WeichenXu123 <...@git.apache.org>
Subject [GitHub] spark pull request #19904: [SPARK-22707][ML] Optimize CrossValidator memory ...
Date Fri, 08 Dec 2017 07:16:16 GMT
Github user WeichenXu123 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19904#discussion_r155715665
  
    --- Diff: mllib/src/main/scala/org/apache/spark/ml/tuning/CrossValidator.scala ---
    @@ -146,25 +147,18 @@ class CrossValidator @Since("1.2.0") (@Since("1.4.0") override val uid: String)
           val validationDataset = sparkSession.createDataFrame(validation, schema).cache()
           logDebug(s"Train split $splitIndex with multiple sets of parameters.")
     
    +      val completeFitCount = new AtomicInteger(0)
    --- End diff --
    
    We want to unpersist the training dataset as soon as all fitting has finished. Your idea here would postpone the unpersist until all fitting *and* evaluation are done. Also, your code should have the same effect as:
    ```scala
    val modelFutures = ...
    // Block until every fit has completed, then release the cached data.
    val foldMetrics = modelFutures.map(ThreadUtils.awaitResult(_, Duration.Inf))
    trainingDataset.unpersist()
    validationDataset.unpersist()
    ```
    
    And about what you said:
    > "Now, the unpersist operation will happen in one of the training threads, instead of asynchronously in its own thread."
    
    What is the possible effect or impact of that? `trainingDataset.unpersist()` is itself an asynchronous method and won't block, so does it actually matter which thread calls it? I think it can be called safely from any thread.
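
    To make the equivalence concrete, here is a minimal, self-contained sketch of the countdown pattern the diff's `completeFitCount` suggests. This is not Spark code: `UnpersistDemo`, the `unpersisted` flag (standing in for `trainingDataset.unpersist()`), and the squaring "fit" are all hypothetical stand-ins. The point is that once the caller awaits every future, the last-finishing fit has already fired the countdown, so the observable effect matches unpersisting after `awaitResult` on all futures.

    ```scala
    import java.util.concurrent.atomic.{AtomicBoolean, AtomicInteger}
    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration.Duration

    // Hypothetical sketch of the countdown-unpersist pattern (not Spark code).
    object UnpersistDemo {
      def run(numModels: Int): (Seq[Int], Boolean) = {
        val unpersisted = new AtomicBoolean(false)
        // Countdown: the thread that completes the last fit "unpersists".
        val completeFitCount = new AtomicInteger(0)

        val modelFutures = (0 until numModels).map { i =>
          Future {
            val model = i * i                  // stand-in for est.fit(trainingDataset)
            if (completeFitCount.incrementAndGet() == numModels) {
              unpersisted.set(true)            // stand-in for trainingDataset.unpersist()
            }
            model
          }
        }

        // Awaiting every future is what makes the two approaches equivalent:
        // by the time the last result is available, the countdown has fired.
        val foldMetrics = modelFutures.map(Await.result(_, Duration.Inf))
        (foldMetrics, unpersisted.get())
      }

      def main(args: Array[String]): Unit = {
        val (metrics, freed) = run(4)
        println(s"metrics=$metrics freed=$freed")
      }
    }
    ```

    Whichever thread happens to finish last runs the flag update; since the real `unpersist()` is non-blocking, that thread is not delayed by it.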


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org

