spark-reviews mailing list archives

From MLnick <...@git.apache.org>
Subject [GitHub] spark pull request: [SPARK-15100][DOC] Modified user guide and exa...
Date Wed, 25 May 2016 06:54:34 GMT
Github user MLnick commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13176#discussion_r64523300
  
    --- Diff: docs/ml-features.md ---
    @@ -1098,9 +1098,9 @@ for more details on the API.
     
     `QuantileDiscretizer` takes a column with continuous features and outputs a column with binned
     categorical features. The number of bins is set by the `numBuckets` parameter.
    -The bin ranges are chosen using an approximate algorithm (see the documentation for [approxQuantile](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/DataFrameStatFunctions.scala)
    +The bin ranges are chosen using an approximate algorithm (see the documentation for [approxQuantile](api/scala/index.html#org.apache.spark.sql.DataFrameStatFunctions.scala)
     for a detailed description). The precision of the approximation can be controlled with the
    -`relativeError` parameter. When set to zero, exact quantiles are calculated.
    +`relativeError` parameter. When set to zero, exact quantiles are calculated. Computing exact quantiles is an expensive operation.
     The lower and upper bin bounds will be `-Infinity` and `+Infinity` covering all real values.
     
     **Examples**
    --- End diff --
    
    @GayathriMurali are you sure about that? Because I get this:
    
    ```
    scala> import org.apache.spark.ml.feature.QuantileDiscretizer
    import org.apache.spark.ml.feature.QuantileDiscretizer
    
    scala> val data = Array((0, 18.0), (1, 19.0), (2, 8.0), (3, 5.0), (4, 2.2))
    data: Array[(Int, Double)] = Array((0,18.0), (1,19.0), (2,8.0), (3,5.0), (4,2.2))
    
    scala> val df = spark.createDataFrame(data).toDF("id", "hour")
    df: org.apache.spark.sql.DataFrame = [id: int, hour: double]
    
    scala> val discretizer = new QuantileDiscretizer().setInputCol("hour").setOutputCol("result").setNumBuckets(3)
    discretizer: org.apache.spark.ml.feature.QuantileDiscretizer = quantileDiscretizer_c6622394ff70
    
    scala> discretizer.fit(df).transform(df).show
    +---+----+------+
    | id|hour|result|
    +---+----+------+
    |  0|18.0|   2.0|
    |  1|19.0|   2.0|
    |  2| 8.0|   2.0|
    |  3| 5.0|   2.0|
    |  4| 2.2|   1.0|
    +---+----+------+
    
    
    scala> discretizer.setRelativeError(0).fit(df).transform(df).show
    +---+----+------+
    | id|hour|result|
    +---+----+------+
    |  0|18.0|   2.0|
    |  1|19.0|   2.0|
    |  2| 8.0|   2.0|
    |  3| 5.0|   1.0|
    |  4| 2.2|   0.0|
    +---+----+------+
    ```
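
    For reference, the splits themselves can be inspected directly with `df.stat.approxQuantile`, the same approximate-quantile API the doc text links to. A minimal sketch against the `df` above, assuming (purely for illustration) that three buckets correspond roughly to the 1/3 and 2/3 quantiles:

    ```
    // Approximate vs. exact quantiles of the "hour" column.
    // The probabilities Array(1.0 / 3, 2.0 / 3) and the 0.001 error are
    // illustrative choices, not necessarily what QuantileDiscretizer uses internally.
    val approxSplits = df.stat.approxQuantile("hour", Array(1.0 / 3, 2.0 / 3), 0.001)
    val exactSplits  = df.stat.approxQuantile("hour", Array(1.0 / 3, 2.0 / 3), 0.0)

    println(s"approximate splits: ${approxSplits.mkString(", ")}")
    println(s"exact splits:       ${exactSplits.mkString(", ")}")
    ```

    With `relativeError = 0` the quantiles are computed exactly, which is consistent with the bucket assignments for 5.0 and 2.2 shifting in the second table above.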


