spark-commits mailing list archives

From: m...@apache.org
Subject: git commit: [SPARK-3081][MLLIB] rename RandomRDDGenerators to RandomRDDs
Date: Sat, 16 Aug 2014 22:14:49 GMT
Repository: spark
Updated Branches:
  refs/heads/master 7e70708a9 -> ac6411c6e


[SPARK-3081][MLLIB] rename RandomRDDGenerators to RandomRDDs

The name `RandomRDDGenerators` suggests a factory for `RandomRDDGenerator` instances. However, its methods return RDDs, not RDD generators, so a more accurate (and shorter) name is `RandomRDDs`.

dorx brkyvz

Author: Xiangrui Meng <meng@databricks.com>

Closes #1979 from mengxr/randomrdds and squashes the following commits:

b161a2d [Xiangrui Meng] rename RandomRDDGenerators to RandomRDDs
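
Since only the object name changes, migration at call sites is mechanical. A minimal sketch (illustrative, not part of this commit; `sc` is an assumed, already-constructed SparkContext):

    import org.apache.spark.mllib.random.RandomRDDs

    // Before this commit: RandomRDDGenerators.normalRDD(sc, 1000000L, 10, 42L)
    // After it, the same method with the same signature lives on RandomRDDs:
    val normals = RandomRDDs.normalRDD(sc, 1000000L, 10, 42L)  // RDD[Double] ~ N(0.0, 1.0)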


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/ac6411c6
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/ac6411c6
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/ac6411c6

Branch: refs/heads/master
Commit: ac6411c6e75906997c78de23dfdbc8d225b87cfd
Parents: 7e70708
Author: Xiangrui Meng <meng@databricks.com>
Authored: Sat Aug 16 15:14:43 2014 -0700
Committer: Xiangrui Meng <meng@databricks.com>
Committed: Sat Aug 16 15:14:43 2014 -0700

----------------------------------------------------------------------
 .../spark/mllib/api/python/PythonMLLibAPI.scala |   2 +-
 .../mllib/random/RandomRDDGenerators.scala      | 493 -------------------
 .../apache/spark/mllib/random/RandomRDDs.scala  | 493 +++++++++++++++++++
 .../mllib/random/RandomRDDGeneratorsSuite.scala | 160 ------
 .../spark/mllib/random/RandomRDDsSuite.scala    | 160 ++++++
 python/pyspark/mllib/random.py                  |  25 +-
 6 files changed, 666 insertions(+), 667 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/ac6411c6/mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala
----------------------------------------------------------------------
diff --git a/mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala b/mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala
index 18dc087..4343124 100644
--- a/mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala
+++ b/mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala
@@ -27,7 +27,7 @@ import org.apache.spark.mllib.classification._
 import org.apache.spark.mllib.clustering._
 import org.apache.spark.mllib.optimization._
 import org.apache.spark.mllib.linalg.{Matrix, SparseVector, Vector, Vectors}
-import org.apache.spark.mllib.random.{RandomRDDGenerators => RG}
+import org.apache.spark.mllib.random.{RandomRDDs => RG}
 import org.apache.spark.mllib.recommendation._
 import org.apache.spark.mllib.regression._
 import org.apache.spark.mllib.tree.configuration.{Algo, Strategy}
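
Because PythonMLLibAPI imports the object under the alias `RG`, this one-line import change covers the whole file: every call through `RG` compiles unchanged. The aliasing pattern, sketched (illustrative; `sc` is an assumed SparkContext):

    import org.apache.spark.mllib.random.{RandomRDDs => RG}

    // Call sites go through the alias, so renaming the object touches only the import:
    val u = RG.uniformRDD(sc, 1000L, 4, 11L)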

http://git-wip-us.apache.org/repos/asf/spark/blob/ac6411c6/mllib/src/main/scala/org/apache/spark/mllib/random/RandomRDDGenerators.scala
----------------------------------------------------------------------
diff --git a/mllib/src/main/scala/org/apache/spark/mllib/random/RandomRDDGenerators.scala b/mllib/src/main/scala/org/apache/spark/mllib/random/RandomRDDGenerators.scala
deleted file mode 100644
index b0a0593..0000000
--- a/mllib/src/main/scala/org/apache/spark/mllib/random/RandomRDDGenerators.scala
+++ /dev/null
@@ -1,493 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.spark.mllib.random
-
-import org.apache.spark.SparkContext
-import org.apache.spark.annotation.Experimental
-import org.apache.spark.mllib.linalg.Vector
-import org.apache.spark.mllib.rdd.{RandomVectorRDD, RandomRDD}
-import org.apache.spark.rdd.RDD
-import org.apache.spark.util.Utils
-
-import scala.reflect.ClassTag
-
-/**
- * :: Experimental ::
- * Generator methods for creating RDDs comprised of i.i.d. samples from some distribution.
- */
-@Experimental
-object RandomRDDGenerators {
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD comprised of i.i.d. samples from the uniform distribution on [0.0, 1.0].
-   *
-   * To transform the distribution in the generated RDD from U[0.0, 1.0] to U[a, b], use
-   * `RandomRDDGenerators.uniformRDD(sc, n, p, seed).map(v => a + (b - a) * v)`.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param size Size of the RDD.
-   * @param numPartitions Number of partitions in the RDD.
-   * @param seed Seed for the RNG that generates the seed for the generator in each partition.
-   * @return RDD[Double] comprised of i.i.d. samples ~ U[0.0, 1.0].
-   */
-  @Experimental
-  def uniformRDD(sc: SparkContext, size: Long, numPartitions: Int, seed: Long): RDD[Double] = {
-    val uniform = new UniformGenerator()
-    randomRDD(sc, uniform,  size, numPartitions, seed)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD comprised of i.i.d. samples from the uniform distribution on [0.0, 1.0].
-   *
-   * To transform the distribution in the generated RDD from U[0.0, 1.0] to U[a, b], use
-   * `RandomRDDGenerators.uniformRDD(sc, n, p).map(v => a + (b - a) * v)`.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param size Size of the RDD.
-   * @param numPartitions Number of partitions in the RDD.
-   * @return RDD[Double] comprised of i.i.d. samples ~ U[0.0, 1.0].
-   */
-  @Experimental
-  def uniformRDD(sc: SparkContext, size: Long, numPartitions: Int): RDD[Double] = {
-    uniformRDD(sc, size, numPartitions, Utils.random.nextLong)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD comprised of i.i.d. samples from the uniform distribution on [0.0, 1.0].
-   * sc.defaultParallelism used for the number of partitions in the RDD.
-   *
-   * To transform the distribution in the generated RDD from U[0.0, 1.0] to U[a, b], use
-   * `RandomRDDGenerators.uniformRDD(sc, n).map(v => a + (b - a) * v)`.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param size Size of the RDD.
-   * @return RDD[Double] comprised of i.i.d. samples ~ U[0.0, 1.0].
-   */
-  @Experimental
-  def uniformRDD(sc: SparkContext, size: Long): RDD[Double] = {
-    uniformRDD(sc, size, sc.defaultParallelism, Utils.random.nextLong)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD comprised of i.i.d. samples from the standard normal distribution.
-   *
-   * To transform the distribution in the generated RDD from standard normal to some other normal
-   * N(mean, sigma), use `RandomRDDGenerators.normalRDD(sc, n, p, seed).map(v => mean + sigma * v)`.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param size Size of the RDD.
-   * @param numPartitions Number of partitions in the RDD.
-   * @param seed Seed for the RNG that generates the seed for the generator in each partition.
-   * @return RDD[Double] comprised of i.i.d. samples ~ N(0.0, 1.0).
-   */
-  @Experimental
-  def normalRDD(sc: SparkContext, size: Long, numPartitions: Int, seed: Long): RDD[Double] = {
-    val normal = new StandardNormalGenerator()
-    randomRDD(sc, normal, size, numPartitions, seed)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD comprised of i.i.d. samples from the standard normal distribution.
-   *
-   * To transform the distribution in the generated RDD from standard normal to some other normal
-   * N(mean, sigma), use `RandomRDDGenerators.normalRDD(sc, n, p).map(v => mean + sigma * v)`.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param size Size of the RDD.
-   * @param numPartitions Number of partitions in the RDD.
-   * @return RDD[Double] comprised of i.i.d. samples ~ N(0.0, 1.0).
-   */
-  @Experimental
-  def normalRDD(sc: SparkContext, size: Long, numPartitions: Int): RDD[Double] = {
-    normalRDD(sc, size, numPartitions, Utils.random.nextLong)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD comprised of i.i.d. samples from the standard normal distribution.
-   * sc.defaultParallelism used for the number of partitions in the RDD.
-   *
-   * To transform the distribution in the generated RDD from standard normal to some other normal
-   * N(mean, sigma), use `RandomRDDGenerators.normalRDD(sc, n).map(v => mean + sigma * v)`.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param size Size of the RDD.
-   * @return RDD[Double] comprised of i.i.d. samples ~ N(0.0, 1.0).
-   */
-  @Experimental
-  def normalRDD(sc: SparkContext, size: Long): RDD[Double] = {
-    normalRDD(sc, size, sc.defaultParallelism, Utils.random.nextLong)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD comprised of i.i.d. samples from the Poisson distribution with the input mean.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param mean Mean, or lambda, for the Poisson distribution.
-   * @param size Size of the RDD.
-   * @param numPartitions Number of partitions in the RDD.
-   * @param seed Seed for the RNG that generates the seed for the generator in each partition.
-   * @return RDD[Double] comprised of i.i.d. samples ~ Pois(mean).
-   */
-  @Experimental
-  def poissonRDD(sc: SparkContext,
-      mean: Double,
-      size: Long,
-      numPartitions: Int,
-      seed: Long): RDD[Double] = {
-    val poisson = new PoissonGenerator(mean)
-    randomRDD(sc, poisson, size, numPartitions, seed)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD comprised of i.i.d. samples from the Poisson distribution with the input mean.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param mean Mean, or lambda, for the Poisson distribution.
-   * @param size Size of the RDD.
-   * @param numPartitions Number of partitions in the RDD.
-   * @return RDD[Double] comprised of i.i.d. samples ~ Pois(mean).
-   */
-  @Experimental
-  def poissonRDD(sc: SparkContext, mean: Double, size: Long, numPartitions: Int): RDD[Double] = {
-    poissonRDD(sc, mean, size, numPartitions, Utils.random.nextLong)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD comprised of i.i.d. samples from the Poisson distribution with the input mean.
-   * sc.defaultParallelism used for the number of partitions in the RDD.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param mean Mean, or lambda, for the Poisson distribution.
-   * @param size Size of the RDD.
-   * @return RDD[Double] comprised of i.i.d. samples ~ Pois(mean).
-   */
-  @Experimental
-  def poissonRDD(sc: SparkContext, mean: Double, size: Long): RDD[Double] = {
-    poissonRDD(sc, mean, size, sc.defaultParallelism, Utils.random.nextLong)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD comprised of i.i.d. samples produced by the input DistributionGenerator.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param generator DistributionGenerator used to populate the RDD.
-   * @param size Size of the RDD.
-   * @param numPartitions Number of partitions in the RDD.
-   * @param seed Seed for the RNG that generates the seed for the generator in each partition.
-   * @return RDD[Double] comprised of i.i.d. samples produced by generator.
-   */
-  @Experimental
-  def randomRDD[T: ClassTag](sc: SparkContext,
-      generator: RandomDataGenerator[T],
-      size: Long,
-      numPartitions: Int,
-      seed: Long): RDD[T] = {
-    new RandomRDD[T](sc, size, numPartitions, generator, seed)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD comprised of i.i.d. samples produced by the input DistributionGenerator.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param generator DistributionGenerator used to populate the RDD.
-   * @param size Size of the RDD.
-   * @param numPartitions Number of partitions in the RDD.
-   * @return RDD[Double] comprised of i.i.d. samples produced by generator.
-   */
-  @Experimental
-  def randomRDD[T: ClassTag](sc: SparkContext,
-      generator: RandomDataGenerator[T],
-      size: Long,
-      numPartitions: Int): RDD[T] = {
-    randomRDD[T](sc, generator, size, numPartitions, Utils.random.nextLong)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD comprised of i.i.d. samples produced by the input DistributionGenerator.
-   * sc.defaultParallelism used for the number of partitions in the RDD.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param generator DistributionGenerator used to populate the RDD.
-   * @param size Size of the RDD.
-   * @return RDD[Double] comprised of i.i.d. samples produced by generator.
-   */
-  @Experimental
-  def randomRDD[T: ClassTag](sc: SparkContext,
-      generator: RandomDataGenerator[T],
-      size: Long): RDD[T] = {
-    randomRDD[T](sc, generator, size, sc.defaultParallelism, Utils.random.nextLong)
-  }
-
-  // TODO Generate RDD[Vector] from multivariate distributions.
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
-   * uniform distribution on [0.0 1.0].
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param numRows Number of Vectors in the RDD.
-   * @param numCols Number of elements in each Vector.
-   * @param numPartitions Number of partitions in the RDD.
-   * @param seed Seed for the RNG that generates the seed for the generator in each partition.
-   * @return RDD[Vector] with vectors containing i.i.d samples ~ U[0.0, 1.0].
-   */
-  @Experimental
-  def uniformVectorRDD(sc: SparkContext,
-      numRows: Long,
-      numCols: Int,
-      numPartitions: Int,
-      seed: Long): RDD[Vector] = {
-    val uniform = new UniformGenerator()
-    randomVectorRDD(sc, uniform, numRows, numCols, numPartitions, seed)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
-   * uniform distribution on [0.0 1.0].
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param numRows Number of Vectors in the RDD.
-   * @param numCols Number of elements in each Vector.
-   * @param numPartitions Number of partitions in the RDD.
-   * @return RDD[Vector] with vectors containing i.i.d. samples ~ U[0.0, 1.0].
-   */
-  @Experimental
-  def uniformVectorRDD(sc: SparkContext,
-      numRows: Long,
-      numCols: Int,
-      numPartitions: Int): RDD[Vector] = {
-    uniformVectorRDD(sc, numRows, numCols, numPartitions, Utils.random.nextLong)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
-   * uniform distribution on [0.0 1.0].
-   * sc.defaultParallelism used for the number of partitions in the RDD.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param numRows Number of Vectors in the RDD.
-   * @param numCols Number of elements in each Vector.
-   * @return RDD[Vector] with vectors containing i.i.d. samples ~ U[0.0, 1.0].
-   */
-  @Experimental
-  def uniformVectorRDD(sc: SparkContext, numRows: Long, numCols: Int): RDD[Vector] = {
-    uniformVectorRDD(sc, numRows, numCols, sc.defaultParallelism, Utils.random.nextLong)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
-   * standard normal distribution.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param numRows Number of Vectors in the RDD.
-   * @param numCols Number of elements in each Vector.
-   * @param numPartitions Number of partitions in the RDD.
-   * @param seed Seed for the RNG that generates the seed for the generator in each partition.
-   * @return RDD[Vector] with vectors containing i.i.d. samples ~ N(0.0, 1.0).
-   */
-  @Experimental
-  def normalVectorRDD(sc: SparkContext,
-      numRows: Long,
-      numCols: Int,
-      numPartitions: Int,
-      seed: Long): RDD[Vector] = {
-    val uniform = new StandardNormalGenerator()
-    randomVectorRDD(sc, uniform, numRows, numCols, numPartitions, seed)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
-   * standard normal distribution.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param numRows Number of Vectors in the RDD.
-   * @param numCols Number of elements in each Vector.
-   * @param numPartitions Number of partitions in the RDD.
-   * @return RDD[Vector] with vectors containing i.i.d. samples ~ N(0.0, 1.0).
-   */
-  @Experimental
-  def normalVectorRDD(sc: SparkContext,
-      numRows: Long,
-      numCols: Int,
-      numPartitions: Int): RDD[Vector] = {
-    normalVectorRDD(sc, numRows, numCols, numPartitions, Utils.random.nextLong)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
-   * standard normal distribution.
-   * sc.defaultParallelism used for the number of partitions in the RDD.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param numRows Number of Vectors in the RDD.
-   * @param numCols Number of elements in each Vector.
-   * @return RDD[Vector] with vectors containing i.i.d. samples ~ N(0.0, 1.0).
-   */
-  @Experimental
-  def normalVectorRDD(sc: SparkContext, numRows: Long, numCols: Int): RDD[Vector] = {
-    normalVectorRDD(sc, numRows, numCols, sc.defaultParallelism, Utils.random.nextLong)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
-   * Poisson distribution with the input mean.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param mean Mean, or lambda, for the Poisson distribution.
-   * @param numRows Number of Vectors in the RDD.
-   * @param numCols Number of elements in each Vector.
-   * @param numPartitions Number of partitions in the RDD.
-   * @param seed Seed for the RNG that generates the seed for the generator in each partition.
-   * @return RDD[Vector] with vectors containing i.i.d. samples ~ Pois(mean).
-   */
-  @Experimental
-  def poissonVectorRDD(sc: SparkContext,
-      mean: Double,
-      numRows: Long,
-      numCols: Int,
-      numPartitions: Int,
-      seed: Long): RDD[Vector] = {
-    val poisson = new PoissonGenerator(mean)
-    randomVectorRDD(sc, poisson, numRows, numCols, numPartitions, seed)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
-   * Poisson distribution with the input mean.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param mean Mean, or lambda, for the Poisson distribution.
-   * @param numRows Number of Vectors in the RDD.
-   * @param numCols Number of elements in each Vector.
-   * @param numPartitions Number of partitions in the RDD.
-   * @return RDD[Vector] with vectors containing i.i.d. samples ~ Pois(mean).
-   */
-  @Experimental
-  def poissonVectorRDD(sc: SparkContext,
-      mean: Double,
-      numRows: Long,
-      numCols: Int,
-      numPartitions: Int): RDD[Vector] = {
-    poissonVectorRDD(sc, mean, numRows, numCols, numPartitions, Utils.random.nextLong)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
-   * Poisson distribution with the input mean.
-   * sc.defaultParallelism used for the number of partitions in the RDD.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param mean Mean, or lambda, for the Poisson distribution.
-   * @param numRows Number of Vectors in the RDD.
-   * @param numCols Number of elements in each Vector.
-   * @return RDD[Vector] with vectors containing i.i.d. samples ~ Pois(mean).
-   */
-  @Experimental
-  def poissonVectorRDD(sc: SparkContext,
-      mean: Double,
-      numRows: Long,
-      numCols: Int): RDD[Vector] = {
-    poissonVectorRDD(sc, mean, numRows, numCols, sc.defaultParallelism, Utils.random.nextLong)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD[Vector] with vectors containing i.i.d. samples produced by the
-   * input DistributionGenerator.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param generator DistributionGenerator used to populate the RDD.
-   * @param numRows Number of Vectors in the RDD.
-   * @param numCols Number of elements in each Vector.
-   * @param numPartitions Number of partitions in the RDD.
-   * @param seed Seed for the RNG that generates the seed for the generator in each partition.
-   * @return RDD[Vector] with vectors containing i.i.d. samples produced by generator.
-   */
-  @Experimental
-  def randomVectorRDD(sc: SparkContext,
-      generator: RandomDataGenerator[Double],
-      numRows: Long,
-      numCols: Int,
-      numPartitions: Int,
-      seed: Long): RDD[Vector] = {
-    new RandomVectorRDD(sc, numRows, numCols, numPartitions, generator, seed)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD[Vector] with vectors containing i.i.d. samples produced by the
-   * input DistributionGenerator.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param generator DistributionGenerator used to populate the RDD.
-   * @param numRows Number of Vectors in the RDD.
-   * @param numCols Number of elements in each Vector.
-   * @param numPartitions Number of partitions in the RDD.
-   * @return RDD[Vector] with vectors containing i.i.d. samples produced by generator.
-   */
-  @Experimental
-  def randomVectorRDD(sc: SparkContext,
-      generator: RandomDataGenerator[Double],
-      numRows: Long,
-      numCols: Int,
-      numPartitions: Int): RDD[Vector] = {
-    randomVectorRDD(sc, generator, numRows, numCols, numPartitions, Utils.random.nextLong)
-  }
-
-  /**
-   * :: Experimental ::
-   * Generates an RDD[Vector] with vectors containing i.i.d. samples produced by the
-   * input DistributionGenerator.
-   * sc.defaultParallelism used for the number of partitions in the RDD.
-   *
-   * @param sc SparkContext used to create the RDD.
-   * @param generator DistributionGenerator used to populate the RDD.
-   * @param numRows Number of Vectors in the RDD.
-   * @param numCols Number of elements in each Vector.
-   * @return RDD[Vector] with vectors containing i.i.d. samples produced by generator.
-   */
-  @Experimental
-  def randomVectorRDD(sc: SparkContext,
-      generator: RandomDataGenerator[Double],
-      numRows: Long,
-      numCols: Int): RDD[Vector] = {
-    randomVectorRDD(sc, generator, numRows, numCols,
-      sc.defaultParallelism, Utils.random.nextLong)
-  }
-}

http://git-wip-us.apache.org/repos/asf/spark/blob/ac6411c6/mllib/src/main/scala/org/apache/spark/mllib/random/RandomRDDs.scala
----------------------------------------------------------------------
diff --git a/mllib/src/main/scala/org/apache/spark/mllib/random/RandomRDDs.scala b/mllib/src/main/scala/org/apache/spark/mllib/random/RandomRDDs.scala
new file mode 100644
index 0000000..3627036
--- /dev/null
+++ b/mllib/src/main/scala/org/apache/spark/mllib/random/RandomRDDs.scala
@@ -0,0 +1,493 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.mllib.random
+
+import scala.reflect.ClassTag
+
+import org.apache.spark.SparkContext
+import org.apache.spark.annotation.Experimental
+import org.apache.spark.mllib.linalg.Vector
+import org.apache.spark.mllib.rdd.{RandomVectorRDD, RandomRDD}
+import org.apache.spark.rdd.RDD
+import org.apache.spark.util.Utils
+
+/**
+ * :: Experimental ::
+ * Generator methods for creating RDDs comprised of i.i.d. samples from some distribution.
+ */
+@Experimental
+object RandomRDDs {
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD comprised of i.i.d. samples from the uniform distribution on [0.0, 1.0].
+   *
+   * To transform the distribution in the generated RDD from U[0.0, 1.0] to U[a, b], use
+   * `RandomRDDs.uniformRDD(sc, n, p, seed).map(v => a + (b - a) * v)`.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param size Size of the RDD.
+   * @param numPartitions Number of partitions in the RDD.
+   * @param seed Seed for the RNG that generates the seed for the generator in each partition.
+   * @return RDD[Double] comprised of i.i.d. samples ~ U[0.0, 1.0].
+   */
+  @Experimental
+  def uniformRDD(sc: SparkContext, size: Long, numPartitions: Int, seed: Long): RDD[Double] = {
+    val uniform = new UniformGenerator()
+    randomRDD(sc, uniform, size, numPartitions, seed)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD comprised of i.i.d. samples from the uniform distribution on [0.0, 1.0].
+   *
+   * To transform the distribution in the generated RDD from U[0.0, 1.0] to U[a, b], use
+   * `RandomRDDs.uniformRDD(sc, n, p).map(v => a + (b - a) * v)`.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param size Size of the RDD.
+   * @param numPartitions Number of partitions in the RDD.
+   * @return RDD[Double] comprised of i.i.d. samples ~ U[0.0, 1.0].
+   */
+  @Experimental
+  def uniformRDD(sc: SparkContext, size: Long, numPartitions: Int): RDD[Double] = {
+    uniformRDD(sc, size, numPartitions, Utils.random.nextLong)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD comprised of i.i.d. samples from the uniform distribution on [0.0, 1.0].
+   * sc.defaultParallelism used for the number of partitions in the RDD.
+   *
+   * To transform the distribution in the generated RDD from U[0.0, 1.0] to U[a, b], use
+   * `RandomRDDs.uniformRDD(sc, n).map(v => a + (b - a) * v)`.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param size Size of the RDD.
+   * @return RDD[Double] comprised of i.i.d. samples ~ U[0.0, 1.0].
+   */
+  @Experimental
+  def uniformRDD(sc: SparkContext, size: Long): RDD[Double] = {
+    uniformRDD(sc, size, sc.defaultParallelism, Utils.random.nextLong)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD comprised of i.i.d. samples from the standard normal distribution.
+   *
+   * To transform the distribution in the generated RDD from standard normal to some other normal
+   * N(mean, sigma), use `RandomRDDs.normalRDD(sc, n, p, seed).map(v => mean + sigma * v)`.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param size Size of the RDD.
+   * @param numPartitions Number of partitions in the RDD.
+   * @param seed Seed for the RNG that generates the seed for the generator in each partition.
+   * @return RDD[Double] comprised of i.i.d. samples ~ N(0.0, 1.0).
+   */
+  @Experimental
+  def normalRDD(sc: SparkContext, size: Long, numPartitions: Int, seed: Long): RDD[Double] = {
+    val normal = new StandardNormalGenerator()
+    randomRDD(sc, normal, size, numPartitions, seed)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD comprised of i.i.d. samples from the standard normal distribution.
+   *
+   * To transform the distribution in the generated RDD from standard normal to some other normal
+   * N(mean, sigma), use `RandomRDDs.normalRDD(sc, n, p).map(v => mean + sigma * v)`.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param size Size of the RDD.
+   * @param numPartitions Number of partitions in the RDD.
+   * @return RDD[Double] comprised of i.i.d. samples ~ N(0.0, 1.0).
+   */
+  @Experimental
+  def normalRDD(sc: SparkContext, size: Long, numPartitions: Int): RDD[Double] = {
+    normalRDD(sc, size, numPartitions, Utils.random.nextLong)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD comprised of i.i.d. samples from the standard normal distribution.
+   * sc.defaultParallelism used for the number of partitions in the RDD.
+   *
+   * To transform the distribution in the generated RDD from standard normal to some other normal
+   * N(mean, sigma), use `RandomRDDs.normalRDD(sc, n).map(v => mean + sigma * v)`.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param size Size of the RDD.
+   * @return RDD[Double] comprised of i.i.d. samples ~ N(0.0, 1.0).
+   */
+  @Experimental
+  def normalRDD(sc: SparkContext, size: Long): RDD[Double] = {
+    normalRDD(sc, size, sc.defaultParallelism, Utils.random.nextLong)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD comprised of i.i.d. samples from the Poisson distribution with the input mean.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param mean Mean, or lambda, for the Poisson distribution.
+   * @param size Size of the RDD.
+   * @param numPartitions Number of partitions in the RDD.
+   * @param seed Seed for the RNG that generates the seed for the generator in each partition.
+   * @return RDD[Double] comprised of i.i.d. samples ~ Pois(mean).
+   */
+  @Experimental
+  def poissonRDD(sc: SparkContext,
+      mean: Double,
+      size: Long,
+      numPartitions: Int,
+      seed: Long): RDD[Double] = {
+    val poisson = new PoissonGenerator(mean)
+    randomRDD(sc, poisson, size, numPartitions, seed)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD comprised of i.i.d. samples from the Poisson distribution with the input mean.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param mean Mean, or lambda, for the Poisson distribution.
+   * @param size Size of the RDD.
+   * @param numPartitions Number of partitions in the RDD.
+   * @return RDD[Double] comprised of i.i.d. samples ~ Pois(mean).
+   */
+  @Experimental
+  def poissonRDD(sc: SparkContext, mean: Double, size: Long, numPartitions: Int): RDD[Double] = {
+    poissonRDD(sc, mean, size, numPartitions, Utils.random.nextLong)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD comprised of i.i.d. samples from the Poisson distribution with the input mean.
+   * sc.defaultParallelism used for the number of partitions in the RDD.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param mean Mean, or lambda, for the Poisson distribution.
+   * @param size Size of the RDD.
+   * @return RDD[Double] comprised of i.i.d. samples ~ Pois(mean).
+   */
+  @Experimental
+  def poissonRDD(sc: SparkContext, mean: Double, size: Long): RDD[Double] = {
+    poissonRDD(sc, mean, size, sc.defaultParallelism, Utils.random.nextLong)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD comprised of i.i.d. samples produced by the input RandomDataGenerator.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param generator RandomDataGenerator used to populate the RDD.
+   * @param size Size of the RDD.
+   * @param numPartitions Number of partitions in the RDD.
+   * @param seed Seed for the RNG that generates the seed for the generator in each partition.
+   * @return RDD[T] comprised of i.i.d. samples produced by generator.
+   */
+  @Experimental
+  def randomRDD[T: ClassTag](sc: SparkContext,
+      generator: RandomDataGenerator[T],
+      size: Long,
+      numPartitions: Int,
+      seed: Long): RDD[T] = {
+    new RandomRDD[T](sc, size, numPartitions, generator, seed)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD comprised of i.i.d. samples produced by the input RandomDataGenerator.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param generator RandomDataGenerator used to populate the RDD.
+   * @param size Size of the RDD.
+   * @param numPartitions Number of partitions in the RDD.
+   * @return RDD[T] comprised of i.i.d. samples produced by generator.
+   */
+  @Experimental
+  def randomRDD[T: ClassTag](sc: SparkContext,
+      generator: RandomDataGenerator[T],
+      size: Long,
+      numPartitions: Int): RDD[T] = {
+    randomRDD[T](sc, generator, size, numPartitions, Utils.random.nextLong)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD comprised of i.i.d. samples produced by the input RandomDataGenerator.
+   * sc.defaultParallelism used for the number of partitions in the RDD.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param generator RandomDataGenerator used to populate the RDD.
+   * @param size Size of the RDD.
+   * @return RDD[T] comprised of i.i.d. samples produced by generator.
+   */
+  @Experimental
+  def randomRDD[T: ClassTag](sc: SparkContext,
+      generator: RandomDataGenerator[T],
+      size: Long): RDD[T] = {
+    randomRDD[T](sc, generator, size, sc.defaultParallelism, Utils.random.nextLong)
+  }
+
+  // TODO Generate RDD[Vector] from multivariate distributions.
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
+   * uniform distribution on [0.0, 1.0].
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param numRows Number of Vectors in the RDD.
+   * @param numCols Number of elements in each Vector.
+   * @param numPartitions Number of partitions in the RDD.
+   * @param seed Seed for the RNG that generates the seed for the generator in each partition.
+   * @return RDD[Vector] with vectors containing i.i.d. samples ~ U[0.0, 1.0].
+   */
+  @Experimental
+  def uniformVectorRDD(sc: SparkContext,
+      numRows: Long,
+      numCols: Int,
+      numPartitions: Int,
+      seed: Long): RDD[Vector] = {
+    val uniform = new UniformGenerator()
+    randomVectorRDD(sc, uniform, numRows, numCols, numPartitions, seed)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
+   * uniform distribution on [0.0, 1.0].
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param numRows Number of Vectors in the RDD.
+   * @param numCols Number of elements in each Vector.
+   * @param numPartitions Number of partitions in the RDD.
+   * @return RDD[Vector] with vectors containing i.i.d. samples ~ U[0.0, 1.0].
+   */
+  @Experimental
+  def uniformVectorRDD(sc: SparkContext,
+      numRows: Long,
+      numCols: Int,
+      numPartitions: Int): RDD[Vector] = {
+    uniformVectorRDD(sc, numRows, numCols, numPartitions, Utils.random.nextLong)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
+   * uniform distribution on [0.0, 1.0].
+   * sc.defaultParallelism used for the number of partitions in the RDD.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param numRows Number of Vectors in the RDD.
+   * @param numCols Number of elements in each Vector.
+   * @return RDD[Vector] with vectors containing i.i.d. samples ~ U[0.0, 1.0].
+   */
+  @Experimental
+  def uniformVectorRDD(sc: SparkContext, numRows: Long, numCols: Int): RDD[Vector] = {
+    uniformVectorRDD(sc, numRows, numCols, sc.defaultParallelism, Utils.random.nextLong)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
+   * standard normal distribution.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param numRows Number of Vectors in the RDD.
+   * @param numCols Number of elements in each Vector.
+   * @param numPartitions Number of partitions in the RDD.
+   * @param seed Seed for the RNG that generates the seed for the generator in each partition.
+   * @return RDD[Vector] with vectors containing i.i.d. samples ~ N(0.0, 1.0).
+   */
+  @Experimental
+  def normalVectorRDD(sc: SparkContext,
+      numRows: Long,
+      numCols: Int,
+      numPartitions: Int,
+      seed: Long): RDD[Vector] = {
+    val normal = new StandardNormalGenerator()
+    randomVectorRDD(sc, normal, numRows, numCols, numPartitions, seed)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
+   * standard normal distribution.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param numRows Number of Vectors in the RDD.
+   * @param numCols Number of elements in each Vector.
+   * @param numPartitions Number of partitions in the RDD.
+   * @return RDD[Vector] with vectors containing i.i.d. samples ~ N(0.0, 1.0).
+   */
+  @Experimental
+  def normalVectorRDD(sc: SparkContext,
+      numRows: Long,
+      numCols: Int,
+      numPartitions: Int): RDD[Vector] = {
+    normalVectorRDD(sc, numRows, numCols, numPartitions, Utils.random.nextLong)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
+   * standard normal distribution.
+   * sc.defaultParallelism used for the number of partitions in the RDD.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param numRows Number of Vectors in the RDD.
+   * @param numCols Number of elements in each Vector.
+   * @return RDD[Vector] with vectors containing i.i.d. samples ~ N(0.0, 1.0).
+   */
+  @Experimental
+  def normalVectorRDD(sc: SparkContext, numRows: Long, numCols: Int): RDD[Vector] = {
+    normalVectorRDD(sc, numRows, numCols, sc.defaultParallelism, Utils.random.nextLong)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
+   * Poisson distribution with the input mean.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param mean Mean, or lambda, for the Poisson distribution.
+   * @param numRows Number of Vectors in the RDD.
+   * @param numCols Number of elements in each Vector.
+   * @param numPartitions Number of partitions in the RDD.
+   * @param seed Seed for the RNG that generates the seed for the generator in each partition.
+   * @return RDD[Vector] with vectors containing i.i.d. samples ~ Pois(mean).
+   */
+  @Experimental
+  def poissonVectorRDD(sc: SparkContext,
+      mean: Double,
+      numRows: Long,
+      numCols: Int,
+      numPartitions: Int,
+      seed: Long): RDD[Vector] = {
+    val poisson = new PoissonGenerator(mean)
+    randomVectorRDD(sc, poisson, numRows, numCols, numPartitions, seed)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
+   * Poisson distribution with the input mean.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param mean Mean, or lambda, for the Poisson distribution.
+   * @param numRows Number of Vectors in the RDD.
+   * @param numCols Number of elements in each Vector.
+   * @param numPartitions Number of partitions in the RDD.
+   * @return RDD[Vector] with vectors containing i.i.d. samples ~ Pois(mean).
+   */
+  @Experimental
+  def poissonVectorRDD(sc: SparkContext,
+      mean: Double,
+      numRows: Long,
+      numCols: Int,
+      numPartitions: Int): RDD[Vector] = {
+    poissonVectorRDD(sc, mean, numRows, numCols, numPartitions, Utils.random.nextLong)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD[Vector] with vectors containing i.i.d. samples drawn from the
+   * Poisson distribution with the input mean.
+   * sc.defaultParallelism used for the number of partitions in the RDD.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param mean Mean, or lambda, for the Poisson distribution.
+   * @param numRows Number of Vectors in the RDD.
+   * @param numCols Number of elements in each Vector.
+   * @return RDD[Vector] with vectors containing i.i.d. samples ~ Pois(mean).
+   */
+  @Experimental
+  def poissonVectorRDD(sc: SparkContext,
+      mean: Double,
+      numRows: Long,
+      numCols: Int): RDD[Vector] = {
+    poissonVectorRDD(sc, mean, numRows, numCols, sc.defaultParallelism, Utils.random.nextLong)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD[Vector] with vectors containing i.i.d. samples produced by the
+   * input RandomDataGenerator.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param generator RandomDataGenerator used to populate the RDD.
+   * @param numRows Number of Vectors in the RDD.
+   * @param numCols Number of elements in each Vector.
+   * @param numPartitions Number of partitions in the RDD.
+   * @param seed Seed for the RNG that generates the seed for the generator in each partition.
+   * @return RDD[Vector] with vectors containing i.i.d. samples produced by generator.
+   */
+  @Experimental
+  def randomVectorRDD(sc: SparkContext,
+      generator: RandomDataGenerator[Double],
+      numRows: Long,
+      numCols: Int,
+      numPartitions: Int,
+      seed: Long): RDD[Vector] = {
+    new RandomVectorRDD(sc, numRows, numCols, numPartitions, generator, seed)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD[Vector] with vectors containing i.i.d. samples produced by the
+   * input RandomDataGenerator.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param generator RandomDataGenerator used to populate the RDD.
+   * @param numRows Number of Vectors in the RDD.
+   * @param numCols Number of elements in each Vector.
+   * @param numPartitions Number of partitions in the RDD.
+   * @return RDD[Vector] with vectors containing i.i.d. samples produced by generator.
+   */
+  @Experimental
+  def randomVectorRDD(sc: SparkContext,
+      generator: RandomDataGenerator[Double],
+      numRows: Long,
+      numCols: Int,
+      numPartitions: Int): RDD[Vector] = {
+    randomVectorRDD(sc, generator, numRows, numCols, numPartitions, Utils.random.nextLong)
+  }
+
+  /**
+   * :: Experimental ::
+   * Generates an RDD[Vector] with vectors containing i.i.d. samples produced by the
+   * input RandomDataGenerator.
+   * sc.defaultParallelism used for the number of partitions in the RDD.
+   *
+   * @param sc SparkContext used to create the RDD.
+   * @param generator RandomDataGenerator used to populate the RDD.
+   * @param numRows Number of Vectors in the RDD.
+   * @param numCols Number of elements in each Vector.
+   * @return RDD[Vector] with vectors containing i.i.d. samples produced by generator.
+   */
+  @Experimental
+  def randomVectorRDD(sc: SparkContext,
+      generator: RandomDataGenerator[Double],
+      numRows: Long,
+      numCols: Int): RDD[Vector] = {
+    randomVectorRDD(sc, generator, numRows, numCols,
+      sc.defaultParallelism, Utils.random.nextLong)
+  }
+}
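
The scaladoc above derives other distributions from the base samplers with a plain `map`. A runnable sketch of both recipes (illustrative, not part of this commit; the bounds `a`, `b` and the parameters `mean`, `sigma` are assumed values):

    import org.apache.spark.SparkContext
    import org.apache.spark.mllib.random.RandomRDDs

    val sc = new SparkContext("local[2]", "RandomRDDsExample")

    // U[0.0, 1.0] -> U[a, b], per the uniformRDD scaladoc:
    val (a, b) = (-1.0, 1.0)
    val uniform = RandomRDDs.uniformRDD(sc, 100000L, 4, 7L).map(v => a + (b - a) * v)

    // N(0.0, 1.0) -> N(mean, sigma), per the normalRDD scaladoc:
    val (mean, sigma) = (5.0, 2.0)
    val normal = RandomRDDs.normalRDD(sc, 100000L, 4, 7L).map(v => mean + sigma * v)

    // DoubleRDDFunctions provide mean() and stdev() for quick checks:
    println(s"uniform mean ~ ${uniform.mean()}, normal stdev ~ ${normal.stdev()}")
    sc.stop()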

http://git-wip-us.apache.org/repos/asf/spark/blob/ac6411c6/mllib/src/test/scala/org/apache/spark/mllib/random/RandomRDDGeneratorsSuite.scala
----------------------------------------------------------------------
diff --git a/mllib/src/test/scala/org/apache/spark/mllib/random/RandomRDDGeneratorsSuite.scala b/mllib/src/test/scala/org/apache/spark/mllib/random/RandomRDDGeneratorsSuite.scala
deleted file mode 100644
index 96e0bc6..0000000
--- a/mllib/src/test/scala/org/apache/spark/mllib/random/RandomRDDGeneratorsSuite.scala
+++ /dev/null
@@ -1,160 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.spark.mllib.random
-
-import scala.collection.mutable.ArrayBuffer
-
-import org.scalatest.FunSuite
-
-import org.apache.spark.SparkContext._
-import org.apache.spark.mllib.linalg.Vector
-import org.apache.spark.mllib.rdd.{RandomRDDPartition, RandomRDD}
-import org.apache.spark.mllib.util.LocalSparkContext
-import org.apache.spark.rdd.RDD
-import org.apache.spark.util.StatCounter
-
-/*
- * Note: avoid including APIs that do not set the seed for the RNG in unit tests
- * in order to guarantee deterministic behavior.
- *
- * TODO update tests to use TestingUtils for floating point comparison after PR 1367 is merged
- */
-class RandomRDDGeneratorsSuite extends FunSuite with LocalSparkContext with Serializable {
-
-  def testGeneratedRDD(rdd: RDD[Double],
-      expectedSize: Long,
-      expectedNumPartitions: Int,
-      expectedMean: Double,
-      expectedStddev: Double,
-      epsilon: Double = 0.01) {
-    val stats = rdd.stats()
-    assert(expectedSize === stats.count)
-    assert(expectedNumPartitions === rdd.partitions.size)
-    assert(math.abs(stats.mean - expectedMean) < epsilon)
-    assert(math.abs(stats.stdev - expectedStddev) < epsilon)
-  }
-
-  // assume test RDDs are small
-  def testGeneratedVectorRDD(rdd: RDD[Vector],
-      expectedRows: Long,
-      expectedColumns: Int,
-      expectedNumPartitions: Int,
-      expectedMean: Double,
-      expectedStddev: Double,
-      epsilon: Double = 0.01) {
-    assert(expectedNumPartitions === rdd.partitions.size)
-    val values = new ArrayBuffer[Double]()
-    rdd.collect.foreach { vector => {
-      assert(vector.size === expectedColumns)
-      values ++= vector.toArray
-    }}
-    assert(expectedRows === values.size / expectedColumns)
-    val stats = new StatCounter(values)
-    assert(math.abs(stats.mean - expectedMean) < epsilon)
-    assert(math.abs(stats.stdev - expectedStddev) < epsilon)
-  }
-
-  test("RandomRDD sizes") {
-
-    // some cases where size % numParts != 0 to test getPartitions behaves correctly
-    for ((size, numPartitions) <- List((10000, 6), (12345, 1), (1000, 101))) {
-      val rdd = new RandomRDD(sc, size, numPartitions, new UniformGenerator, 0L)
-      assert(rdd.count() === size)
-      assert(rdd.partitions.size === numPartitions)
-
-      // check that partition sizes are balanced
-      val partSizes = rdd.partitions.map(p =>
-        p.asInstanceOf[RandomRDDPartition[Double]].size.toDouble)
-
-      val partStats = new StatCounter(partSizes)
-      assert(partStats.max - partStats.min <= 1)
-    }
-
-    // size > Int.MaxValue
-    val size = Int.MaxValue.toLong * 100L
-    val numPartitions = 101
-    val rdd = new RandomRDD(sc, size, numPartitions, new UniformGenerator, 0L)
-    assert(rdd.partitions.size === numPartitions)
-    val count = rdd.partitions.foldLeft(0L) { (count, part) =>
-      count + part.asInstanceOf[RandomRDDPartition[Double]].size
-    }
-    assert(count === size)
-
-    // size needs to be positive
-    intercept[IllegalArgumentException] { new RandomRDD(sc, 0, 10, new UniformGenerator, 0L) }
-
-    // numPartitions needs to be positive
-    intercept[IllegalArgumentException] { new RandomRDD(sc, 100, 0, new UniformGenerator, 0L) }
-
-    // partition size needs to be <= Int.MaxValue
-    intercept[IllegalArgumentException] {
-      new RandomRDD(sc, Int.MaxValue.toLong * 100L, 99, new UniformGenerator, 0L)
-    }
-  }
-
-  test("randomRDD for different distributions") {
-    val size = 100000L
-    val numPartitions = 10
-    val poissonMean = 100.0
-
-    for (seed <- 0 until 5) {
-      val uniform = RandomRDDGenerators.uniformRDD(sc, size, numPartitions, seed)
-      testGeneratedRDD(uniform, size, numPartitions, 0.5, 1 / math.sqrt(12))
-
-      val normal = RandomRDDGenerators.normalRDD(sc, size, numPartitions, seed)
-      testGeneratedRDD(normal, size, numPartitions, 0.0, 1.0)
-
-      val poisson = RandomRDDGenerators.poissonRDD(sc, poissonMean, size, numPartitions, seed)
-      testGeneratedRDD(poisson, size, numPartitions, poissonMean, math.sqrt(poissonMean), 0.1)
-    }
-
-    // mock distribution to check that partitions have unique seeds
-    val random = RandomRDDGenerators.randomRDD(sc, new MockDistro(), 1000L, 1000, 0L)
-    assert(random.collect.size === random.collect.distinct.size)
-  }
-
-  test("randomVectorRDD for different distributions") {
-    val rows = 1000L
-    val cols = 100
-    val parts = 10
-    val poissonMean = 100.0
-
-    for (seed <- 0 until 5) {
-      val uniform = RandomRDDGenerators.uniformVectorRDD(sc, rows, cols, parts, seed)
-      testGeneratedVectorRDD(uniform, rows, cols, parts, 0.5, 1 / math.sqrt(12))
-
-      val normal = RandomRDDGenerators.normalVectorRDD(sc, rows, cols, parts, seed)
-      testGeneratedVectorRDD(normal, rows, cols, parts, 0.0, 1.0)
-
-      val poisson = RandomRDDGenerators.poissonVectorRDD(sc, poissonMean, rows, cols, parts, seed)
-      testGeneratedVectorRDD(poisson, rows, cols, parts, poissonMean, math.sqrt(poissonMean), 0.1)
-    }
-  }
-}
-
-private[random] class MockDistro extends RandomDataGenerator[Double] {
-
-  var seed = 0L
-
-  // This allows us to check that each partition has a different seed
-  override def nextValue(): Double = seed.toDouble
-
-  override def setSeed(seed: Long) = this.seed = seed
-
-  override def copy(): MockDistro = new MockDistro
-}

http://git-wip-us.apache.org/repos/asf/spark/blob/ac6411c6/mllib/src/test/scala/org/apache/spark/mllib/random/RandomRDDsSuite.scala
----------------------------------------------------------------------
diff --git a/mllib/src/test/scala/org/apache/spark/mllib/random/RandomRDDsSuite.scala b/mllib/src/test/scala/org/apache/spark/mllib/random/RandomRDDsSuite.scala
new file mode 100644
index 0000000..c50b78b
--- /dev/null
+++ b/mllib/src/test/scala/org/apache/spark/mllib/random/RandomRDDsSuite.scala
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.mllib.random
+
+import scala.collection.mutable.ArrayBuffer
+
+import org.scalatest.FunSuite
+
+import org.apache.spark.SparkContext._
+import org.apache.spark.mllib.linalg.Vector
+import org.apache.spark.mllib.rdd.{RandomRDDPartition, RandomRDD}
+import org.apache.spark.mllib.util.LocalSparkContext
+import org.apache.spark.rdd.RDD
+import org.apache.spark.util.StatCounter
+
+/*
+ * Note: avoid including APIs that do not set the seed for the RNG in unit tests
+ * in order to guarantee deterministic behavior.
+ *
+ * TODO update tests to use TestingUtils for floating point comparison after PR 1367 is merged
+ */
+class RandomRDDsSuite extends FunSuite with LocalSparkContext with Serializable {
+
+  def testGeneratedRDD(rdd: RDD[Double],
+      expectedSize: Long,
+      expectedNumPartitions: Int,
+      expectedMean: Double,
+      expectedStddev: Double,
+      epsilon: Double = 0.01) {
+    val stats = rdd.stats()
+    assert(expectedSize === stats.count)
+    assert(expectedNumPartitions === rdd.partitions.size)
+    assert(math.abs(stats.mean - expectedMean) < epsilon)
+    assert(math.abs(stats.stdev - expectedStddev) < epsilon)
+  }
+
+  // assume test RDDs are small
+  def testGeneratedVectorRDD(rdd: RDD[Vector],
+      expectedRows: Long,
+      expectedColumns: Int,
+      expectedNumPartitions: Int,
+      expectedMean: Double,
+      expectedStddev: Double,
+      epsilon: Double = 0.01) {
+    assert(expectedNumPartitions === rdd.partitions.size)
+    val values = new ArrayBuffer[Double]()
+    rdd.collect.foreach { vector => {
+      assert(vector.size === expectedColumns)
+      values ++= vector.toArray
+    }}
+    assert(expectedRows === values.size / expectedColumns)
+    val stats = new StatCounter(values)
+    assert(math.abs(stats.mean - expectedMean) < epsilon)
+    assert(math.abs(stats.stdev - expectedStddev) < epsilon)
+  }
+
+  test("RandomRDD sizes") {
+
+    // some cases where size % numParts != 0 to test getPartitions behaves correctly
+    for ((size, numPartitions) <- List((10000, 6), (12345, 1), (1000, 101))) {
+      val rdd = new RandomRDD(sc, size, numPartitions, new UniformGenerator, 0L)
+      assert(rdd.count() === size)
+      assert(rdd.partitions.size === numPartitions)
+
+      // check that partition sizes are balanced
+      val partSizes = rdd.partitions.map(p =>
+        p.asInstanceOf[RandomRDDPartition[Double]].size.toDouble)
+
+      val partStats = new StatCounter(partSizes)
+      assert(partStats.max - partStats.min <= 1)
+    }
+
+    // size > Int.MaxValue
+    val size = Int.MaxValue.toLong * 100L
+    val numPartitions = 101
+    val rdd = new RandomRDD(sc, size, numPartitions, new UniformGenerator, 0L)
+    assert(rdd.partitions.size === numPartitions)
+    val count = rdd.partitions.foldLeft(0L) { (count, part) =>
+      count + part.asInstanceOf[RandomRDDPartition[Double]].size
+    }
+    assert(count === size)
+
+    // size needs to be positive
+    intercept[IllegalArgumentException] { new RandomRDD(sc, 0, 10, new UniformGenerator, 0L) }
+
+    // numPartitions needs to be positive
+    intercept[IllegalArgumentException] { new RandomRDD(sc, 100, 0, new UniformGenerator, 0L) }
+
+    // partition size needs to be <= Int.MaxValue
+    intercept[IllegalArgumentException] {
+      new RandomRDD(sc, Int.MaxValue.toLong * 100L, 99, new UniformGenerator, 0L)
+    }
+  }
+
+  test("randomRDD for different distributions") {
+    val size = 100000L
+    val numPartitions = 10
+    val poissonMean = 100.0
+
+    for (seed <- 0 until 5) {
+      val uniform = RandomRDDs.uniformRDD(sc, size, numPartitions, seed)
+      testGeneratedRDD(uniform, size, numPartitions, 0.5, 1 / math.sqrt(12))
+
+      val normal = RandomRDDs.normalRDD(sc, size, numPartitions, seed)
+      testGeneratedRDD(normal, size, numPartitions, 0.0, 1.0)
+
+      val poisson = RandomRDDs.poissonRDD(sc, poissonMean, size, numPartitions, seed)
+      testGeneratedRDD(poisson, size, numPartitions, poissonMean, math.sqrt(poissonMean), 0.1)
+    }
+
+    // mock distribution to check that partitions have unique seeds
+    val random = RandomRDDs.randomRDD(sc, new MockDistro(), 1000L, 1000, 0L)
+    assert(random.collect.size === random.collect.distinct.size)
+  }
+
+  test("randomVectorRDD for different distributions") {
+    val rows = 1000L
+    val cols = 100
+    val parts = 10
+    val poissonMean = 100.0
+
+    for (seed <- 0 until 5) {
+      val uniform = RandomRDDs.uniformVectorRDD(sc, rows, cols, parts, seed)
+      testGeneratedVectorRDD(uniform, rows, cols, parts, 0.5, 1 / math.sqrt(12))
+
+      val normal = RandomRDDs.normalVectorRDD(sc, rows, cols, parts, seed)
+      testGeneratedVectorRDD(normal, rows, cols, parts, 0.0, 1.0)
+
+      val poisson = RandomRDDs.poissonVectorRDD(sc, poissonMean, rows, cols, parts, seed)
+      testGeneratedVectorRDD(poisson, rows, cols, parts, poissonMean, math.sqrt(poissonMean), 0.1)
+    }
+  }
+}
+
+private[random] class MockDistro extends RandomDataGenerator[Double] {
+
+  var seed = 0L
+
+  // This allows us to check that each partition has a different seed
+  override def nextValue(): Double = seed.toDouble
+
+  override def setSeed(seed: Long) = this.seed = seed
+
+  override def copy(): MockDistro = new MockDistro
+}
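
The suite's `MockDistro` doubles as a template for user-defined distributions: Spark calls `setSeed` once per partition with a distinct seed, `nextValue` draws the samples, and `copy` gives each partition an independent instance. A hedged sketch of a custom generator built on that contract (the class and its rate parameter are illustrative, not part of this commit):

    import java.util.Random

    import org.apache.spark.mllib.random.{RandomDataGenerator, RandomRDDs}

    class ExponentialGenerator(rate: Double) extends RandomDataGenerator[Double] {
      private val rng = new Random()

      // Inverse-CDF sampling: -ln(1 - U) / rate ~ Exp(rate) for U ~ U[0, 1)
      override def nextValue(): Double = -math.log(1.0 - rng.nextDouble()) / rate

      // Called once per partition with a distinct seed, as MockDistro verifies
      override def setSeed(seed: Long): Unit = rng.setSeed(seed)

      override def copy(): ExponentialGenerator = new ExponentialGenerator(rate)
    }

    // Used like the built-in generators (sc is an assumed SparkContext):
    // val exp = RandomRDDs.randomRDD(sc, new ExponentialGenerator(2.0), 100000L, 4, 0L)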

http://git-wip-us.apache.org/repos/asf/spark/blob/ac6411c6/python/pyspark/mllib/random.py
----------------------------------------------------------------------
diff --git a/python/pyspark/mllib/random.py b/python/pyspark/mllib/random.py
index eb49668..3f3b190 100644
--- a/python/pyspark/mllib/random.py
+++ b/python/pyspark/mllib/random.py
@@ -25,8 +25,7 @@ from pyspark.mllib._common import _deserialize_double, _deserialize_double_vecto
 from pyspark.serializers import NoOpSerializer
 
 
-class RandomRDDGenerators:
-
+class RandomRDDs:
     """
     Generator methods for creating RDDs comprised of i.i.d samples from
     some distribution.
@@ -40,17 +39,17 @@ class RandomRDDGenerators:
 
         To transform the distribution in the generated RDD from U[0.0, 1.0]
         to U[a, b], use
-        C{RandomRDDGenerators.uniformRDD(sc, n, p, seed)\
+        C{RandomRDDs.uniformRDD(sc, n, p, seed)\
           .map(lambda v: a + (b - a) * v)}
 
-        >>> x = RandomRDDGenerators.uniformRDD(sc, 100).collect()
+        >>> x = RandomRDDs.uniformRDD(sc, 100).collect()
         >>> len(x)
         100
         >>> max(x) <= 1.0 and min(x) >= 0.0
         True
-        >>> RandomRDDGenerators.uniformRDD(sc, 100, 4).getNumPartitions()
+        >>> RandomRDDs.uniformRDD(sc, 100, 4).getNumPartitions()
         4
-        >>> parts = RandomRDDGenerators.uniformRDD(sc, 100, seed=4).getNumPartitions()
+        >>> parts = RandomRDDs.uniformRDD(sc, 100, seed=4).getNumPartitions()
         >>> parts == sc.defaultParallelism
         True
         """
@@ -66,10 +65,10 @@ class RandomRDDGenerators:
 
         To transform the distribution in the generated RDD from standard normal
         to some other normal N(mean, sigma), use
-        C{RandomRDDGenerators.normal(sc, n, p, seed)\
+        C{RandomRDDs.normalRDD(sc, n, p, seed)\
           .map(lambda v: mean + sigma * v)}
 
-        >>> x = RandomRDDGenerators.normalRDD(sc, 1000, seed=1L)
+        >>> x = RandomRDDs.normalRDD(sc, 1000, seed=1L)
         >>> stats = x.stats()
         >>> stats.count()
         1000L
@@ -89,7 +88,7 @@ class RandomRDDGenerators:
         distribution with the input mean.
 
         >>> mean = 100.0
-        >>> x = RandomRDDGenerators.poissonRDD(sc, mean, 1000, seed=1L)
+        >>> x = RandomRDDs.poissonRDD(sc, mean, 1000, seed=1L)
         >>> stats = x.stats()
         >>> stats.count()
         1000L
@@ -110,12 +109,12 @@ class RandomRDDGenerators:
         from the uniform distribution on [0.0 1.0].
 
         >>> import numpy as np
-        >>> mat = np.matrix(RandomRDDGenerators.uniformVectorRDD(sc, 10, 10).collect())
+        >>> mat = np.matrix(RandomRDDs.uniformVectorRDD(sc, 10, 10).collect())
         >>> mat.shape
         (10, 10)
         >>> mat.max() <= 1.0 and mat.min() >= 0.0
         True
-        >>> RandomRDDGenerators.uniformVectorRDD(sc, 10, 10, 4).getNumPartitions()
+        >>> RandomRDDs.uniformVectorRDD(sc, 10, 10, 4).getNumPartitions()
         4
         """
         jrdd = sc._jvm.PythonMLLibAPI() \
@@ -130,7 +129,7 @@ class RandomRDDGenerators:
         from the standard normal distribution.
 
         >>> import numpy as np
-        >>> mat = np.matrix(RandomRDDGenerators.normalVectorRDD(sc, 100, 100, seed=1L).collect())
+        >>> mat = np.matrix(RandomRDDs.normalVectorRDD(sc, 100, 100, seed=1L).collect())
         >>> mat.shape
         (100, 100)
         >>> abs(mat.mean() - 0.0) < 0.1
@@ -151,7 +150,7 @@ class RandomRDDGenerators:
 
         >>> import numpy as np
         >>> mean = 100.0
-        >>> rdd = RandomRDDGenerators.poissonVectorRDD(sc, mean, 100, 100, seed=1L)
+        >>> rdd = RandomRDDs.poissonVectorRDD(sc, mean, 100, 100, seed=1L)
         >>> mat = np.mat(rdd.collect())
         >>> mat.shape
         (100, 100)


