spark-reviews mailing list archives

From mengxr <...@git.apache.org>
Subject [GitHub] spark pull request: [SPARK-5013] [MLlib] [WIP] Added documentation...
Date Fri, 06 Feb 2015 01:27:18 GMT
Github user mengxr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4401#discussion_r24216731
  
    --- Diff: docs/mllib-clustering.md ---
    @@ -168,6 +187,112 @@ print("Within Set Sum of Squared Error = " + str(WSSSE))
     
     </div>
     
    +#### GaussianMixture
    +
    +<div class="codetabs">
    +<div data-lang="scala" markdown="1">
    +In the following example, after loading and parsing data, we use a
    +[`GaussianMixture`](api/scala/index.html#org.apache.spark.mllib.clustering.GaussianMixture)
    +object to cluster the data into two clusters. The desired number of clusters is passed
    +to the algorithm. We then output the parameters of the mixture model.
    +
    +{% highlight scala %}
    +import org.apache.spark.mllib.clustering.GaussianMixture
    +import org.apache.spark.mllib.linalg.Vectors
    +
    +// Load and parse the data
    +val data = sc.textFile("data/mllib/gmm_data.txt")
    +val parsedData = data.map(s => Vectors.dense(s.trim.split(' ').map(_.toDouble))).cache()
    +
    +// Cluster the data into two classes using GaussianMixture
    +val gmm = new GaussianMixture().setK(2).run(parsedData)
    +
    +// output parameters of max-likelihood model
    +for (i <- 0 until gmm.k) {
    +  println("weight=%f\nmu=%s\nsigma=\n%s\n" format 
    +    (gmm.weights(i), gmm.gaussians(i).mu, gmm.gaussians(i).sigma))
    +}
    +
    +{% endhighlight %}
    +</div>
    +
    +<div data-lang="java" markdown="1">
    +All of MLlib's methods use Java-friendly types, so you can import and call them there the same
    +way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the
    +Spark Java API uses a separate `JavaRDD` class. You can convert a Java RDD to a Scala one by
    +calling `.rdd()` on your `JavaRDD` object. A self-contained application example
    +that is equivalent to the provided example in Scala is given below:
    +
    +{% highlight java %}
    +import org.apache.spark.api.java.*;
    +import org.apache.spark.api.java.function.Function;
    +import org.apache.spark.mllib.clustering.GaussianMixture;
    +import org.apache.spark.mllib.clustering.GaussianMixtureModel;
    +import org.apache.spark.mllib.linalg.Vector;
    +import org.apache.spark.mllib.linalg.Vectors;
    +import org.apache.spark.SparkConf;
    +
    +public class GaussianMixtureExample {
    +  public static void main(String[] args) {
    +    SparkConf conf = new SparkConf().setAppName("GaussianMixture Example");
    +    JavaSparkContext sc = new JavaSparkContext(conf);
    +
    +    // Load and parse data
    +    String path = "data/mllib/gmm_data.txt";
    +    JavaRDD<String> data = sc.textFile(path);
    +    JavaRDD<Vector> parsedData = data.map(
    +      new Function<String, Vector>() {
    +        public Vector call(String s) {
    +          String[] sarray = s.trim().split(" ");
    +          double[] values = new double[sarray.length];
    +          for (int i = 0; i < sarray.length; i++)
    +            values[i] = Double.parseDouble(sarray[i]);
    +          return Vectors.dense(values);
    +        }
    +      }
    +    );
    +    parsedData.cache();
    +
    +    // Cluster the data into two classes using GaussianMixture
    +    GaussianMixtureModel gmm = new GaussianMixture().setK(2).run(parsedData.rdd());
    +
    +    // Output the parameters of the mixture model
    +    for (int j = 0; j < gmm.k(); j++) {
    +      System.out.printf("weight=%f\nmu=%s\nsigma=\n%s\n",
    +        gmm.weights()[j], gmm.gaussians()[j].mu(), gmm.gaussians()[j].sigma());
    +    }
    +  }
    +}
    +{% endhighlight %}
    +</div>
    +
    +<div data-lang="python" markdown="1">
    +In the following example, after loading and parsing data, we use a
    +[`GaussianMixture`](api/scala/index.html#org.apache.spark.mllib.clustering.GaussianMixture)
    +object to cluster the data into two clusters. The desired number of clusters is passed
    +to the algorithm. We then output the parameters of the mixture model.
    +
    +{% highlight python %}
    +from pyspark.mllib.clustering import GaussianMixture
    +from numpy import array
    +
    +# Load and parse the data
    +data = sc.textFile("data/mllib/gmm_data.txt")
    +parsedData = data.map(lambda line: array([float(x) for x in line.strip().split(' ')]))
    +
    +# Build the model (cluster the data)
    +gmm = new GaussianMixture.train(parsedData, 2)
    --- End diff --
    
    remove `new`
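    For reference, a corrected version of that line might look like the sketch below. `train` is a class method on `GaussianMixture` in `pyspark.mllib.clustering`, so it is called directly on the class; Python has no `new` keyword.

    ```python
    # `train` is a class method, so no constructor call is needed:
    gmm = GaussianMixture.train(parsedData, 2)
    ```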


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org
