spark-issues mailing list archives

From "Apache Spark (JIRA)" <j...@apache.org>
Subject [jira] [Assigned] (SPARK-24609) PySpark/SparkR doc doesn't explain RandomForestClassifier.featureSubsetStrategy well
Date Tue, 17 Jul 2018 09:13:00 GMT

     [ https://issues.apache.org/jira/browse/SPARK-24609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-24609:
------------------------------------

    Assignee: Apache Spark

> PySpark/SparkR doc doesn't explain RandomForestClassifier.featureSubsetStrategy well
> ------------------------------------------------------------------------------------
>
>                 Key: SPARK-24609
>                 URL: https://issues.apache.org/jira/browse/SPARK-24609
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.3.1
>            Reporter: Xiangrui Meng
>            Assignee: Apache Spark
>            Priority: Major
>
> In Scala doc ([https://spark.apache.org/docs/2.3.0/api/scala/index.html#org.apache.spark.ml.classification.RandomForestClassifier]), we have:
>  
> {quote}The number of features to consider for splits at each tree node. Supported options:
>  * "auto": Choose automatically for task: If numTrees == 1, set to "all". If numTrees > 1 (forest), set to "sqrt" for classification and to "onethird" for regression.
>  * "all": use all features
>  * "onethird": use 1/3 of the features
>  * "sqrt": use sqrt(number of features)
>  * "log2": use log2(number of features)
>  * "n": when n is in the range (0, 1.0], use n * number of features. When n is in the range (1, number of features), use n features. (default = "auto")
> These various settings are based on the following references:
>  * log2: tested in Breiman (2001)
>  * sqrt: recommended by Breiman manual for random forests
>  * The defaults of sqrt (classification) and onethird (regression) match the R randomForest package.{quote}
>  
> The entire paragraph is missing from the PySpark doc ([https://spark.apache.org/docs/2.3.0/api/python/pyspark.ml.html#pyspark.ml.classification.RandomForestClassifier.featureSubsetStrategy]). The same issue exists for SparkR (https://github.com/apache/spark/blob/master/R/pkg/R/mllib_tree.R#L365).
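For reference, the documented semantics of the strategy values above can be sketched in plain Python. This is a hypothetical helper (`features_per_split` is not a Spark API) that only mirrors the wording of the Scala doc; Spark's actual resolution logic lives in MLlib and may differ in edge cases such as rounding:

```python
import math

def features_per_split(strategy, num_features, num_trees=20,
                       is_classification=True):
    """Sketch of how a featureSubsetStrategy string maps to a feature count.

    Mirrors the documented semantics only; NOT Spark's implementation.
    """
    if strategy == "auto":
        # numTrees == 1 -> "all"; a forest -> "sqrt" (classification)
        # or "onethird" (regression), per the Scala doc.
        if num_trees == 1:
            strategy = "all"
        else:
            strategy = "sqrt" if is_classification else "onethird"
    if strategy == "all":
        return num_features
    if strategy == "onethird":
        return max(1, int(num_features / 3.0))
    if strategy == "sqrt":
        return max(1, int(math.sqrt(num_features)))
    if strategy == "log2":
        return max(1, int(math.log2(num_features)))
    # Numeric "n": a fraction in (0, 1.0] means n * num_features;
    # a value in (1, num_features) means an absolute count of n features.
    n = float(strategy)
    if 0 < n <= 1:
        return max(1, int(n * num_features))
    return int(n)
```

For example, with 100 features and the default `"auto"` strategy, a classification forest would consider 10 features (sqrt) per split, while a single tree would consider all 100.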



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

