spark-reviews mailing list archives

From BryanCutler <>
Subject [GitHub] spark pull request: [SPARK-12631] [PYSPARK] [DOC] PySpark clusteri...
Date Fri, 29 Jan 2016 18:29:23 GMT
Github user BryanCutler commented on a diff in the pull request:
    --- Diff: mllib/src/main/scala/org/apache/spark/mllib/clustering/KMeans.scala ---
    @@ -482,12 +482,15 @@ object KMeans {
        * Trains a k-means model using the given set of parameters.
    -   * @param data training points stored as `RDD[Vector]`
    -   * @param k number of clusters
    -   * @param maxIterations max number of iterations
    -   * @param runs number of parallel runs, defaults to 1. The best model is returned.
    -   * @param initializationMode initialization model, either "random" or "k-means||" (default).
    -   * @param seed random seed value for cluster initialization
    +   * @param data Train with a `RDD[Vector]` of data points.
    --- End diff --
    Sorry, I was sure I had that right, but you are correct.  I can see my high school English
teacher shaking her head in disappointment!
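
For context, here is a minimal sketch of how the parameters documented in this diff are passed to the multi-run `KMeans.train` overload under discussion. The `SparkContext` setup and the sample points are illustrative assumptions, not part of the PR:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

object KMeansTrainExample {
  def main(args: Array[String]): Unit = {
    // Local context for illustration only; a real job would get this from its environment.
    val sc = new SparkContext(
      new SparkConf().setAppName("kmeans-doc-example").setMaster("local[2]"))

    // Training points stored as RDD[Vector], per the scaladoc being edited.
    val data = sc.parallelize(Seq(
      Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1),
      Vectors.dense(9.0, 9.0), Vectors.dense(9.1, 9.1)))

    // The overload whose @param docs are changed in this diff.
    val model = KMeans.train(
      data,
      k = 2,                            // number of clusters
      maxIterations = 20,               // max number of iterations
      runs = 1,                         // parallel runs; the best model is returned
      initializationMode = "k-means||", // or "random"
      seed = 42L)                       // random seed for cluster initialization

    println(s"Cluster centers: ${model.clusterCenters.mkString(", ")}")
    sc.stop()
  }
}
```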

