spark-reviews mailing list archives

From jkbradley <...@git.apache.org>
Subject [GitHub] spark pull request: [SPARK-11959] [SPARK-15484] [Doc] [ML] Documen...
Date Mon, 23 May 2016 17:05:21 GMT
Github user jkbradley commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13262#discussion_r64252070
  
    --- Diff: docs/ml-advanced.md ---
    @@ -4,10 +4,85 @@ title: Advanced topics - spark.ml
     displayTitle: Advanced topics - spark.ml
     ---
     
    -# Optimization of linear methods
    +* Table of contents
    +{:toc}
    +
    +`\[
    +\newcommand{\R}{\mathbb{R}}
    +\newcommand{\E}{\mathbb{E}} 
    +\newcommand{\x}{\mathbf{x}}
    +\newcommand{\y}{\mathbf{y}}
    +\newcommand{\wv}{\mathbf{w}}
    +\newcommand{\av}{\mathbf{\alpha}}
    +\newcommand{\bv}{\mathbf{b}}
    +\newcommand{\N}{\mathbb{N}}
    +\newcommand{\id}{\mathbf{I}} 
    +\newcommand{\ind}{\mathbf{1}} 
    +\newcommand{\0}{\mathbf{0}} 
    +\newcommand{\unit}{\mathbf{e}} 
    +\newcommand{\one}{\mathbf{1}} 
    +\newcommand{\zero}{\mathbf{0}}
    +\]`
    +
    +# Optimization of linear methods (developer)
    +
    +## Limited-memory BFGS (L-BFGS)
    +[L-BFGS](http://en.wikipedia.org/wiki/Limited-memory_BFGS) is an optimization
    +algorithm in the family of quasi-Newton methods that solves optimization problems
    +of the form `$\min_{\wv \in\R^d} \; f(\wv)$`. The L-BFGS method approximates the
    +objective function locally as a quadratic without evaluating the second partial
    +derivatives of the objective function to construct the Hessian matrix. Instead,
    +the Hessian matrix is approximated from previous gradient evaluations, so there is
    +no vertical scalability issue (in the number of training features), unlike when
    +the Hessian matrix is computed explicitly as in Newton's method. As a result,
    +L-BFGS often achieves faster convergence than other first-order optimization
    +methods.
     
    -The optimization algorithm underlying the implementation is called
     [Orthant-Wise Limited-memory
     QuasiNewton](http://research-srv.microsoft.com/en-us/um/people/jfgao/paper/icml07scalable.pdf)
    -(OWL-QN). It is an extension of L-BFGS that can effectively handle L1
    -regularization and elastic net.
    +(OWL-QN) is an extension of L-BFGS that can effectively handle L1 regularization and elastic net.
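    +
    +For illustration, a minimal sketch of turning on an L1/elastic-net penalty, which
    +routes the optimization through OWL-QN rather than plain L-BFGS (a `training`
    +DataFrame with the usual `features` and `label` columns is assumed):
    +
    +```scala
    +import org.apache.spark.ml.classification.LogisticRegression
    +
    +// Hypothetical input: a DataFrame `training` with "features" and "label" columns.
    +// A nonzero L1 component makes the underlying optimizer use OWL-QN
    +// instead of plain L-BFGS.
    +val lr = new LogisticRegression()
    +  .setRegParam(0.1)          // overall regularization strength
    +  .setElasticNetParam(1.0)   // 1.0 = pure L1; values in (0, 1) mix L1 and L2
    +val model = lr.fit(training)
    +```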
    +
    +L-BFGS is used as a solver for [LinearRegression](api/scala/index.html#org.apache.spark.ml.regression.LinearRegression),
    +[LogisticRegression](api/scala/index.html#org.apache.spark.ml.classification.LogisticRegression),
    +[AFTSurvivalRegression](api/scala/index.html#org.apache.spark.ml.regression.AFTSurvivalRegression)
    +and [MultilayerPerceptronClassifier](api/scala/index.html#org.apache.spark.ml.classification.MultilayerPerceptronClassifier).
    +
    +The `spark.ml` L-BFGS solver calls the corresponding implementation in [breeze](https://github.com/scalanlp/breeze/blob/master/math/src/main/scala/breeze/optimize/LBFGS.scala).
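    +
    +For reference, a minimal sketch of calling breeze's L-BFGS directly on a toy
    +objective (the `spark.ml` solvers do the same internally, with their own loss
    +and gradient computations):
    +
    +```scala
    +import breeze.linalg.DenseVector
    +import breeze.optimize.{DiffFunction, LBFGS}
    +
    +// Toy objective f(x) = ||x - 3||^2, with gradient 2(x - 3).
    +val f = new DiffFunction[DenseVector[Double]] {
    +  def calculate(x: DenseVector[Double]): (Double, DenseVector[Double]) = {
    +    val d = x - 3.0
    +    (d dot d, d * 2.0)
    +  }
    +}
    +
    +// m is the number of past gradients kept for the inverse-Hessian approximation.
    +val lbfgs = new LBFGS[DenseVector[Double]](maxIter = 100, m = 7)
    +val xOpt = lbfgs.minimize(f, DenseVector.zeros[Double](5))  // converges to ~3.0s
    +```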
    +
    +## Normal equation solver for weighted least squares (normal)
    +
    +`spark.ml` implements a normal equation solver for weighted least squares by [WeightedLeastSquares](https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/optim/WeightedLeastSquares.scala).
    +
    +Given $n$ weighted observations $(w_i, a_i, b_i)$:
    +
    +* $w_i$ the weight of the i-th observation
    +* $a_i$ the feature vector of the i-th observation
    +* $b_i$ the label of the i-th observation
    +
    +The number of features for each observation is $m$. We use the following weighted
    +least squares formulation:
    +`\[
    +\min_{x}\frac{1}{2} \sum_{i=1}^n \frac{w_i(a_i^T x - b_i)^2}{\sum_{k=1}^n w_k} + \frac{1}{2}\frac{\lambda}{\delta}\sum_{j=1}^m(\sigma_{j} x_{j})^2
    +\]`
    +where $\lambda$ is the regularization parameter, $\delta$ is the population standard
    +deviation of the label, and $\sigma_j$ is the population standard deviation of the
    +j-th feature column.
    +
    +This objective function has an analytic solution, and only a single pass over the
    +data is required to collect the statistics needed to solve it.
    +Unlike the original dataset, which can only be stored in a distributed system,
    +these statistics can easily be loaded into memory on a single machine, and the
    +objective function can then be solved via Cholesky factorization on the driver.
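    +
    +Schematically (absorbing the weight normalization and the $\lambda/\delta$ factor
    +into a single constant $\lambda'$), the minimizer satisfies the normal equations
    +`\[
    +(A^T W A + \lambda' \Sigma^2) x = A^T W b
    +\]`
    +where $A$ is the $n \times m$ matrix with rows $a_i^T$, $W = \mathrm{diag}(w_1, \ldots, w_n)$
    +and $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_m)$. The $m \times m$ Gram matrix
    +$A^T W A$ and the vector $A^T W b$ are the statistics collected in a single pass over
    +the data, and the resulting positive-definite system is then solved by Cholesky
    +factorization.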
    +
    +WeightedLeastSquares supports only L2 regularization and provides options to enable
    +or disable regularization and standardization of features and labels.
    +In order to make the normal equation approach efficient, WeightedLeastSquares
    +requires that the number of features be no more than 4096.
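    +
    +For illustration, a minimal sketch of requesting this solver explicitly (again
    +assuming a `training` DataFrame with `features` and `label` columns; with the
    +default `solver` value of `"auto"`, the normal equation path is picked
    +automatically when its conditions are met):
    +
    +```scala
    +import org.apache.spark.ml.regression.LinearRegression
    +
    +// Hypothetical input: a DataFrame `training` with "features" and "label" columns.
    +val lr = new LinearRegression()
    +  .setSolver("normal")  // only L2 regularization is supported on this path
    +  .setRegParam(0.3)
    +val model = lr.fit(training)
    +```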
    +
    +## Iteratively re-weighted least squares (IRLS)
    +
    +`spark.ml` implements iteratively reweighted least squares (IRLS) by [IterativelyReweightedLeastSquares](https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/optim/IterativelyReweightedLeastSquares.scala).
    +It can be used to find the maximum likelihood estimates of a generalized linear model
    +(GLM), to find M-estimators in robust regression, and to solve other optimization problems.
    +Refer to [Iteratively Reweighted Least Squares for Maximum Likelihood Estimation, and some
    +Robust and Resistant Alternatives](http://www.jstor.org/stable/2345503) for more information.
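    +
    +For illustration, a minimal sketch of a GLM fit that is driven by IRLS under the
    +hood (assuming a `training` DataFrame with `features` and `label` columns):
    +
    +```scala
    +import org.apache.spark.ml.regression.GeneralizedLinearRegression
    +
    +// Hypothetical input: a DataFrame `training` with "features" and "label" columns.
    +// GeneralizedLinearRegression maximizes the GLM likelihood by IRLS, solving a
    +// WeightedLeastSquares subproblem in each iteration.
    +val glr = new GeneralizedLinearRegression()
    +  .setFamily("poisson")
    +  .setLink("log")
    +  .setMaxIter(25)
    +val model = glr.fit(training)
    +```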
    +
    +IRLS solves such optimization problems iteratively:
    +
    +* linearize the objective at the current solution and update the corresponding weights.
    +* solve the resulting weighted least squares (WLS) problem by WeightedLeastSquares.
    +* repeat the above steps until convergence.
    +
    +Due to it involves solving a weighted least squares (WLS) problem by WeightedLeastSquares in each step of the iteration,
    --- End diff --
    
    "Due to it" --> "Since it"
    "each step of the iteration" --> "each iteration"


