spark-reviews mailing list archives

From WeichenXu123 <>
Subject [GitHub] spark pull request #19904: [SPARK-22707][ML] Optimize CrossValidator memory ...
Date Fri, 08 Dec 2017 07:16:16 GMT
Github user WeichenXu123 commented on a diff in the pull request:
    --- Diff: mllib/src/main/scala/org/apache/spark/ml/tuning/CrossValidator.scala ---
    @@ -146,25 +147,18 @@ class CrossValidator @Since("1.2.0") (@Since("1.4.0") override val uid: String)
           val validationDataset = sparkSession.createDataFrame(validation, schema).cache()
           logDebug(s"Train split $splitIndex with multiple sets of parameters.")
    +      val completeFitCount = new AtomicInteger(0)
    --- End diff --
    We hope to unpersist the training dataset as soon as all fitting has finished, but your change here postpones the unpersist until all fitting & evaluation are done. Your code should then have the same effect as:
    val modelFutures = ...
    val foldMetrics = modelFutures.map(f => ThreadUtils.awaitResult(f, Duration.Inf))
    And about what you said:
    > "Now, the unpersist operation will happen in one of the training threads, instead of asynchronously in its own thread."
    What's the possible effect or impact? `trainingDataset.unpersist()` is itself an async method and won't block, so will it really have any effect? I think it can be put in any thread.
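To make the timing argument concrete, here is a minimal sketch in plain Scala (no Spark) of the pattern the diff introduces: an `AtomicInteger` counts completed fits, and whichever fitting thread finishes last triggers the unpersist. The `unpersist()` here is a hypothetical stand-in for `trainingDataset.unpersist()`, and `i * i` stands in for `est.fit(...)`; as the comment argues, awaiting every fitting future and then unpersisting reaches the same final state.

```scala
import java.util.concurrent.atomic.{AtomicBoolean, AtomicInteger}
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

object UnpersistTiming {
  // Returns the fitted "models" and whether the stand-in unpersist ran.
  def run(numModels: Int): (Seq[Int], Boolean) = {
    val unpersisted = new AtomicBoolean(false)
    // Stand-in for trainingDataset.unpersist() (itself non-blocking in Spark).
    def unpersist(): Unit = unpersisted.set(true)

    // Pattern from the diff: count completed fits and let the thread that
    // finishes the last fit trigger the unpersist.
    val completeFitCount = new AtomicInteger(0)
    val modelFutures = (1 to numModels).map { i =>
      Future {
        val model = i * i // stand-in for est.fit(trainingDataset, paramMap)
        if (completeFitCount.incrementAndGet() == numModels) unpersist()
        model
      }
    }

    // Equivalent observable effect: once every fitting future has completed,
    // the last increment has already fired the unpersist.
    val models = modelFutures.map(f => Await.result(f, Duration.Inf))
    (models, unpersisted.get)
  }

  def main(args: Array[String]): Unit = println(run(4))
}
```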

