spark-issues mailing list archives

From "Peter Rudenko (JIRA)" <j...@apache.org>
Subject [jira] [Closed] (SPARK-5807) Parallel grid search
Date Sat, 14 Feb 2015 11:00:12 GMT

     [ https://issues.apache.org/jira/browse/SPARK-5807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Rudenko closed SPARK-5807.
--------------------------------
    Resolution: Won't Fix

Never mind. Found a better solution:

{code}
override def fit(dataset: SchemaRDD, paramMaps: Array[ParamMap]): Seq[PipelineModel] = {
  if (parallel) {
    // Run the first paramMap sequentially so the dataset gets cached,
    // then fit the remaining paramMaps in parallel against the warm cache.
    Seq(fit(dataset, paramMaps.head)) ++ paramMaps.tail.par.map(fit(dataset, _)).toVector
  } else {
    paramMaps.map(fit(dataset, _))
  }
}

{code}
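
For reference, here's a minimal self-contained sketch of the same warm-up-then-parallel pattern (the expensiveFit helper and its inputs are made up for illustration, and it assumes Scala 2.12 or earlier, where .par needs no extra import):

{code}
// Stand-in for an expensive fit over a shared dataset:
def expensiveFit(data: Seq[Double], param: Int): Double =
  data.map(_ * param).sum

val data = (1 to 100000).map(_.toDouble)
val params = Array(1, 2, 3, 4)

// The head call runs alone (e.g. so a shared cache is populated exactly once);
// the remaining calls then run concurrently on a parallel collection:
val results = Seq(expensiveFit(data, params.head)) ++
  params.tail.par.map(expensiveFit(data, _)).toVector
{code}

Running the head sequentially means every parallel fit hits an already-cached dataset instead of all of them racing to materialize it at once.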


> Parallel grid search 
> ---------------------
>
>                 Key: SPARK-5807
>                 URL: https://issues.apache.org/jira/browse/SPARK-5807
>             Project: Spark
>          Issue Type: New Feature
>          Components: ML
>    Affects Versions: 1.3.0
>            Reporter: Peter Rudenko
>            Priority: Minor
>
> Right now CrossValidator evaluates each (fold, ParamGrid hyperparameter) combination
> sequentially when searching for the best parameters. Assuming there are enough workers
> and enough memory on the cluster to cache all the training/validation folds, the
> execution can be parallelized. Here's a draft I came up with:
> {code}
> val metrics = new ArrayBuffer[Double](numModels) with mutable.SynchronizedBuffer[Double]
> // Pre-fill with zeros so metrics(i) is a valid index; note that metrics(i) += metric
> // is still a separate synchronized read and write, not one atomic update.
> metrics ++= Array.fill(numModels)(0.0)
> val splits = MLUtils.kFold(dataset, map(numFolds), 0).zipWithIndex
> def processFold(input: ((RDD[sql.Row], RDD[sql.Row]), Int)) = input match {
>   case ((training, validation), splitIndex) => {
>     val trainingDataset = sqlCtx.applySchema(training, schema).cache()
>     val validationDataset = sqlCtx.applySchema(validation, schema).cache()
>     // multi-model training
>     logDebug(s"Train split $splitIndex with multiple sets of parameters.")
>     val models = est.fit(trainingDataset, epm).asInstanceOf[Seq[Model[_]]]
>     var i = 0
>     trainingDataset.unpersist()
>     while (i < numModels) {
>       val metric = eval.evaluate(models(i).transform(validationDataset, epm(i)), map)
>       logDebug(s"Got metric $metric for model trained with ${epm(i)}.")
>       metrics(i) += metric
>       i += 1
>     }
>     validationDataset.unpersist()
>   }
> }
> if (parallel) {
>   splits.par.foreach(processFold)
> } else {
>   splits.foreach(processFold)
> }
> {code}
> Assuming there are 3 folds, running them all in parallel would cache every
> training/validation combination at once (which takes quite a lot of memory), so maybe
> it's possible to cache each fold separately.
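> One way to bound that memory cost (just a sketch on top of the draft above, using the
> task-support hook of Scala's parallel collections; the pool size of 2 is an arbitrary
> example) is to cap how many folds are processed, and therefore cached, concurrently:
> {code}
> import java.util.concurrent.ForkJoinPool
> import scala.collection.parallel.ForkJoinTaskSupport
>
> val parSplits = splits.par
> // At most 2 folds in flight at a time, so at most 2 training/validation
> // pairs are cached simultaneously (Scala 2.12 constructor; 2.10/2.11 take
> // scala.concurrent.forkjoin.ForkJoinPool instead).
> parSplits.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(2))
> parSplits.foreach(processFold)
> {code}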



