spark-reviews mailing list archives

From BryanCutler <...@git.apache.org>
Subject [GitHub] spark pull request: [SPARK-12631] [PYSPARK] [DOC] PySpark clusteri...
Date Fri, 29 Jan 2016 18:29:23 GMT
Github user BryanCutler commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10610#discussion_r51296249
  
    --- Diff: mllib/src/main/scala/org/apache/spark/mllib/clustering/KMeans.scala ---
    @@ -482,12 +482,15 @@ object KMeans {
       /**
        * Trains a k-means model using the given set of parameters.
        *
    -   * @param data training points stored as `RDD[Vector]`
    -   * @param k number of clusters
    -   * @param maxIterations max number of iterations
    -   * @param runs number of parallel runs, defaults to 1. The best model is returned.
    -   * @param initializationMode initialization model, either "random" or "k-means||" (default).
    -   * @param seed random seed value for cluster initialization
    +   * @param data Train with a `RDD[Vector]` of data points.
    --- End diff --
    
    Sorry, I was sure I had that right, but you are correct. I can see my high school English
    teacher shaking her head in disappointment!
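
    For context on the parameters whose scaladoc is edited above, here is a minimal usage
    sketch (not part of the pull request) of how this KMeans.train overload is typically
    called; the input path and parameter values are illustrative assumptions.

    // Minimal sketch, not from the PR: calling the KMeans.train overload whose
    // scaladoc is edited above. The file path and parameter values are
    // illustrative assumptions.
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.mllib.clustering.KMeans
    import org.apache.spark.mllib.linalg.Vectors

    object KMeansTrainSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("KMeansTrainSketch"))

        // data: training points stored as RDD[Vector]
        val data = sc.textFile("data/mllib/kmeans_data.txt")   // hypothetical path
          .map(line => Vectors.dense(line.split(' ').map(_.toDouble)))
          .cache()

        val model = KMeans.train(
          data,
          2,                        // k: number of clusters
          20,                       // maxIterations: max number of iterations
          1,                        // runs: number of parallel runs; the best model is returned
          KMeans.K_MEANS_PARALLEL,  // initializationMode: "random" or "k-means||"
          42L)                      // seed: random seed for cluster initialization

        model.clusterCenters.foreach(println)
        sc.stop()
      }
    }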



