spark-reviews mailing list archives

From dbtsai <...@git.apache.org>
Subject [GitHub] spark pull request: [SPARK-2309][MLlib] Multinomial Logistic Regre...
Date Fri, 30 Jan 2015 04:45:36 GMT
Github user dbtsai commented on a diff in the pull request:

    https://github.com/apache/spark/pull/3833#discussion_r23823961
  
    --- Diff: mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala
---
    @@ -285,6 +377,97 @@ class LogisticRegressionSuite extends FunSuite with MLlibTestSparkContext with M
         assert(modelB1.weights(0) !~== modelB3.weights(0) * 1.0E6 absTol 0.1)
       }
     
    +  test("multinomial logistic regression with LBFGS") {
    +    val nPoints = 10000
    +
    +    /**
    +     * The following weights and xMean/xVariance are computed from the iris dataset
    +     * with lambda = 0.2. As a result, we are actually drawing samples from the
    +     * probability distribution of the built model.
    +     */
    +    val weights = Array(
    +      -0.57997, 0.912083, -0.371077, -0.819866, 2.688191,
    +      -0.16624, -0.84355, -0.048509, -0.301789, 4.170682)
    +
    +    val xMean = Array(5.843, 3.057, 3.758, 1.199)
    +    val xVariance = Array(0.6856, 0.1899, 3.116, 0.581)
    +
    +    val testData = LogisticRegressionSuite.generateMultinomialLogisticInput(
    +      weights, xMean, xVariance, true, nPoints, 42)
    +
    +    val testRDD = sc.parallelize(testData, 2)
    +    testRDD.cache()
    +
    +    val lr = new LogisticRegressionWithLBFGS().setIntercept(true).setNumOfClasses(3)
    +    lr.optimizer.setConvergenceTol(1E-15).setNumIterations(200)
    +
    +    val model = lr.run(testRDD)
    +
    +    /**
    +     * The following are the instructions to reproduce the model using R's glmnet package.
    +     *
    +     * First, use the following Scala code to save the data into `path`.
    +     *
    +     *    testRDD.map(x => x.label + ", " + x.features(0) + ", " + x.features(1) + ", " +
    +     *      x.features(2) + ", " + x.features(3)).saveAsTextFile("path")
    +     *
    +     * Then use the following R code to load the data and train the model with glmnet.
    +     *
    +     *    library("glmnet")
    +     *    data <- read.csv("/Users/dbtsai/data.csv/a", header=FALSE)
    --- End diff --
    
    remove dbtsai
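
For context, the test's 10 weights correspond to a 3-class model over 4 features with an intercept: (numClasses - 1) * (numFeatures + 1) values, with class 0 as the pivot. The sketch below is a minimal, hypothetical illustration (the object name `MultinomialProbs` and the per-class layout of 4 feature weights followed by an intercept are assumptions, not MLlib's confirmed internal layout) of how such weights map a feature vector to class probabilities via a pivoted softmax:

```scala
// Hedged sketch: 3-class multinomial logistic probabilities with class 0 as
// the pivot (its margin fixed at 0). The weight layout assumed here -- for
// each non-pivot class, 4 feature weights followed by an intercept -- is an
// illustration, not necessarily MLlib's exact internal ordering.
object MultinomialProbs {
  def predictProbs(weights: Array[Double], x: Array[Double]): Array[Double] = {
    val d = x.length
    // One margin per non-pivot class: w . x + intercept
    val nonPivot = weights.grouped(d + 1).map { w =>
      (0 until d).map(j => w(j) * x(j)).sum + w(d)
    }.toList
    val margins = 0.0 :: nonPivot
    // Subtract the max margin before exponentiating for numerical stability
    val maxMargin = margins.max
    val exps = margins.map(m => math.exp(m - maxMargin))
    val total = exps.sum
    exps.map(_ / total).toArray
  }
}
```

With the test's weights and xMean as the feature vector, this yields a length-3 probability vector summing to 1, which is the distribution the generator would sample a label from.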


