spark-commits mailing list archives

From jkbrad...@apache.org
Subject spark git commit: [SPARK-10560][PYSPARK][MLLIB][DOCS] Make StreamingLogisticRegressionWithSGD Python API equal to Scala one
Date Tue, 24 Nov 2015 01:12:03 GMT
Repository: spark
Updated Branches:
  refs/heads/branch-1.6 696fc33b8 -> 159d631f5


[SPARK-10560][PYSPARK][MLLIB][DOCS] Make StreamingLogisticRegressionWithSGD Python API equal
to Scala one

This is to bring the API documentation of StreamingLogisticRegressionWithSGD and StreamingLinearRegressionWithSGD
in line with the Scala versions.

- Fixed the algorithm descriptions
- Added default values to parameter descriptions
- Changed StreamingLogisticRegressionWithSGD regParam to default to 0, as in the Scala version

Author: Bryan Cutler <bjcutler@us.ibm.com>

Closes #9141 from BryanCutler/StreamingLogisticRegressionWithSGD-python-api-sync.

(cherry picked from commit 105745645b12afbbc2a350518cb5853a88944183)
Signed-off-by: Joseph K. Bradley <joseph@databricks.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/159d631f
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/159d631f
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/159d631f

Branch: refs/heads/branch-1.6
Commit: 159d631f5d0a0a7209e0a2831d774ecf0d2fa95f
Parents: 696fc33
Author: Bryan Cutler <bjcutler@us.ibm.com>
Authored: Mon Nov 23 17:11:51 2015 -0800
Committer: Joseph K. Bradley <joseph@databricks.com>
Committed: Mon Nov 23 17:11:59 2015 -0800

----------------------------------------------------------------------
 python/pyspark/mllib/classification.py | 37 +++++++++++++++++++----------
 python/pyspark/mllib/regression.py     | 32 ++++++++++++++++---------
 2 files changed, 46 insertions(+), 23 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/159d631f/python/pyspark/mllib/classification.py
----------------------------------------------------------------------
diff --git a/python/pyspark/mllib/classification.py b/python/pyspark/mllib/classification.py
index aab4015..9e6f17e 100644
--- a/python/pyspark/mllib/classification.py
+++ b/python/pyspark/mllib/classification.py
@@ -652,21 +652,34 @@ class NaiveBayes(object):
 @inherit_doc
 class StreamingLogisticRegressionWithSGD(StreamingLinearAlgorithm):
     """
-    Run LogisticRegression with SGD on a batch of data.
-
-    The weights obtained at the end of training a stream are used as initial
-    weights for the next batch.
-
-    :param stepSize: Step size for each iteration of gradient descent.
-    :param numIterations: Number of iterations run for each batch of data.
-    :param miniBatchFraction: Fraction of data on which SGD is run for each
-                              iteration.
-    :param regParam: L2 Regularization parameter.
-    :param convergenceTol: A condition which decides iteration termination.
+    Train or predict a logistic regression model on streaming data. Training uses
+    Stochastic Gradient Descent to update the model based on each new batch of
+    incoming data from a DStream.
+
+    Each batch of data is assumed to be an RDD of LabeledPoints.
+    The number of data points per batch can vary, but the number
+    of features must be constant. An initial weight
+    vector must be provided.
+
+    :param stepSize:
+      Step size for each iteration of gradient descent.
+      (default: 0.1)
+    :param numIterations:
+      Number of iterations run for each batch of data.
+      (default: 50)
+    :param miniBatchFraction:
+      Fraction of each batch of data to use for updates.
+      (default: 1.0)
+    :param regParam:
+      L2 Regularization parameter.
+      (default: 0.0)
+    :param convergenceTol:
+      Value used to determine when to terminate iterations.
+      (default: 0.001)
 
     .. versionadded:: 1.5.0
     """
-    def __init__(self, stepSize=0.1, numIterations=50, miniBatchFraction=1.0, regParam=0.01,
+    def __init__(self, stepSize=0.1, numIterations=50, miniBatchFraction=1.0, regParam=0.0,
                  convergenceTol=0.001):
         self.stepSize = stepSize
         self.numIterations = numIterations
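
For context on the docstring above, here is a minimal usage sketch of StreamingLogisticRegressionWithSGD. It is not part of this commit; the queue-backed stream, toy data, and app name are illustrative assumptions.

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext
    from pyspark.mllib.classification import StreamingLogisticRegressionWithSGD
    from pyspark.mllib.regression import LabeledPoint

    sc = SparkContext(appName="StreamingLogRegSketch")  # hypothetical app name
    ssc = StreamingContext(sc, 1)                       # 1-second batches

    # Each batch is an RDD of LabeledPoints; the number of features must stay constant.
    batches = [sc.parallelize([LabeledPoint(1.0, [1.0, 0.5]),
                               LabeledPoint(0.0, [-1.0, -0.5])])]
    trainingStream = ssc.queueStream(batches)

    # regParam now defaults to 0.0, matching the Scala API.
    model = StreamingLogisticRegressionWithSGD(stepSize=0.1, numIterations=50)
    model.setInitialWeights([0.0, 0.0])  # an initial weight vector must be provided

    model.trainOn(trainingStream)
    ssc.start()
    ssc.awaitTerminationOrTimeout(10)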

http://git-wip-us.apache.org/repos/asf/spark/blob/159d631f/python/pyspark/mllib/regression.py
----------------------------------------------------------------------
diff --git a/python/pyspark/mllib/regression.py b/python/pyspark/mllib/regression.py
index 6f00d1d..13b3397 100644
--- a/python/pyspark/mllib/regression.py
+++ b/python/pyspark/mllib/regression.py
@@ -734,17 +734,27 @@ class StreamingLinearAlgorithm(object):
 @inherit_doc
 class StreamingLinearRegressionWithSGD(StreamingLinearAlgorithm):
     """
-    Run LinearRegression with SGD on a batch of data.
-
-    The problem minimized is (1 / n_samples) * (y - weights'X)**2.
-    After training on a batch of data, the weights obtained at the end of
-    training are used as initial weights for the next batch.
-
-    :param stepSize: Step size for each iteration of gradient descent.
-    :param numIterations: Total number of iterations run.
-    :param miniBatchFraction: Fraction of data on which SGD is run for each
-                              iteration.
-    :param convergenceTol: A condition which decides iteration termination.
+    Train or predict a linear regression model on streaming data. Training uses
+    Stochastic Gradient Descent to update the model based on each new batch of
+    incoming data from a DStream (see `LinearRegressionWithSGD` for model equation).
+
+    Each batch of data is assumed to be an RDD of LabeledPoints.
+    The number of data points per batch can vary, but the number
+    of features must be constant. An initial weight
+    vector must be provided.
+
+    :param stepSize:
+      Step size for each iteration of gradient descent.
+      (default: 0.1)
+    :param numIterations:
+      Number of iterations run for each batch of data.
+      (default: 50)
+    :param miniBatchFraction:
+      Fraction of each batch of data to use for updates.
+      (default: 1.0)
+    :param convergenceTol:
+      Value used to determine when to terminate iterations.
+      (default: 0.001)
 
     .. versionadded:: 1.5.0
     """


