spark-reviews mailing list archives

From sethah <>
Subject [GitHub] spark pull request #13796: [SPARK-7159][ML] Add multiclass logistic regressi...
Date Mon, 15 Aug 2016 18:09:03 GMT
Github user sethah commented on a diff in the pull request:
    --- Diff: mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala
    @@ -945,13 +955,139 @@ class BinaryLogisticRegressionSummary private[classification] (
     private class LogisticAggregator(
         private val numFeatures: Int,
         numClasses: Int,
    -    fitIntercept: Boolean) extends Serializable {
    +    fitIntercept: Boolean,
    +    multinomial: Boolean,
    +    standardize: Boolean) extends Serializable {
       private var weightSum = 0.0
       private var lossSum = 0.0
    -  private val gradientSumArray =
    -    Array.ofDim[Double](if (fitIntercept) numFeatures + 1 else numFeatures)
    +  private val totalCoefficientLength = {
    +    val cols = if (fitIntercept) numFeatures + 1 else numFeatures
    +    val rows = if (multinomial) numClasses else 1
    +    rows * cols
    +  }
    +  private val gradientSumArray = Array.ofDim[Double](totalCoefficientLength)
    +  /** Update gradient and loss using binary loss function. */
    +  private def binaryUpdateInPlace(
    +      features: Vector,
    +      weight: Double,
    +      label: Double,
    +      coefficients: Array[Double],
    +      gradient: Array[Double],
    +      featuresStd: Array[Double],
    +      numFeaturesPlusIntercept: Int,
    +      standardize: Boolean): Unit = {
    +    val margin = - {
    +      var sum = 0.0
    +      features.foreachActive { (index, value) =>
    +        if (featuresStd(index) != 0.0 && value != 0.0) {
    +          val x = if (standardize) value / featuresStd(index) else value
    --- End diff ---
    That's a good point; the current code is confusing. The issue is that **standardizing
    the features in every iteration is not efficient.**
    In the old `mllib` implementation, feature standardization was implemented by transforming
    the entire dataset once, _before_ optimization, and operating on that transformed dataset.
    The results were "unstandardized" at the end to make the transformation transparent. In the
    `ml` implementation of BLOR, the standardization is instead performed by dividing each
    `x_ij` by its column standard deviation inside the aggregator, which costs
    `numFeatures * numClasses * numPoints` extra scalar divisions in every iteration (e.g. with
    10^6 points, 100 features, and 10 classes that is 10^9 extra divisions per iteration).
    I am not sure why it was done differently in `ml`, so I might be missing an important design
    discussion somewhere. Since binary logistic regression will still take the "standardize
    every iteration" approach, but MLOR and BLOR will still call the same shared aggregator,
    I tried to make this generic. It's true we could hardcode MLOR not to do it every iteration
    (so there would be no `standardize` field), but taking two separate approaches within the
    same aggregator without making that explicitly clear seems confusing and unintuitive to me.
    My thought was that we could leave this generic for now and simply remove it if we decide,
    in a later PR, to change the approach in BLOR to match the one proposed here. I appreciate
    thoughts on this.
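    To make the trade-off concrete, here is a minimal sketch in plain Scala of the two
    strategies (this is not the actual aggregator code, and all names below are mine):

    ```scala
    object StandardizationSketch {

      // Strategy used by the current `ml` BLOR: divide each active value by its
      // column standard deviation inside every margin computation, i.e. on every
      // iteration of the optimizer.
      def marginStandardized(
          features: Array[Double],
          coefficients: Array[Double],
          featuresStd: Array[Double]): Double = {
        var sum = 0.0
        var i = 0
        while (i < features.length) {
          if (featuresStd(i) != 0.0 && features(i) != 0.0) {
            sum += coefficients(i) * (features(i) / featuresStd(i))
          }
          i += 1
        }
        sum
      }

      // Strategy used by the old `mllib` implementation: scale the whole dataset
      // once before optimization; every subsequent iteration is a plain dot product.
      def preScale(features: Array[Double], featuresStd: Array[Double]): Array[Double] =
        Array.tabulate(features.length) { i =>
          if (featuresStd(i) != 0.0) features(i) / featuresStd(i) else 0.0
        }

      def margin(features: Array[Double], coefficients: Array[Double]): Double = {
        var sum = 0.0
        var i = 0
        while (i < features.length) {
          sum += coefficients(i) * features(i)
          i += 1
        }
        sum
      }

      // "Unstandardize" coefficients fit on the scaled data so they apply to the
      // original features: w_orig(i) = w_scaled(i) / std(i).
      def unstandardize(coefficients: Array[Double], featuresStd: Array[Double]): Array[Double] =
        Array.tabulate(coefficients.length) { i =>
          if (featuresStd(i) != 0.0) coefficients(i) / featuresStd(i) else 0.0
        }

      def main(args: Array[String]): Unit = {
        val x = Array(1.0, 2.0, 0.0)
        val std = Array(0.5, 2.0, 1.0)
        val w = Array(0.1, 0.2, 0.3)
        // Both strategies compute the same margin; they differ only in when the
        // divisions by the column standard deviations are paid.
        println(marginStandardized(x, w, std)) // 0.4
        println(margin(preScale(x, std), w))   // 0.4
      }
    }
    ```

    Both paths produce identical margins; the only difference is whether the divisions by
    `featuresStd` are paid once up front or on every pass over the data.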
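    As an aside, the new `totalCoefficientLength` in the diff treats the gradient as a
    flattened `rows x cols` matrix (`rows = numClasses` for multinomial, else 1;
    `cols = numFeatures`, plus one column when `fitIntercept` is set). A small sketch of
    one possible indexing scheme (row-major by class is my assumption here, not necessarily
    what this patch does):

    ```scala
    object GradientLayoutSketch {

      // Index into a flat gradient array laid out as a rows x cols matrix,
      // assuming row-major order by class.
      def gradientIndex(classIdx: Int, featureIdx: Int, cols: Int): Int =
        classIdx * cols + featureIdx

      def main(args: Array[String]): Unit = {
        val numFeatures = 3
        val numClasses = 4
        val fitIntercept = true
        val cols = if (fitIntercept) numFeatures + 1 else numFeatures
        val gradientSumArray = Array.ofDim[Double](numClasses * cols)
        // Accumulate a contribution for class 2 / feature 1, and for the intercept
        // slot (the last column) of class 0.
        gradientSumArray(gradientIndex(2, 1, cols)) += 1.0
        gradientSumArray(gradientIndex(0, numFeatures, cols)) += 1.0
        println(gradientSumArray.grouped(cols).map(_.mkString(" ")).mkString("\n"))
      }
    }
    ```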
