spark-commits mailing list archives

From andrewo...@apache.org
Subject [2/2] spark git commit: [SPARK-15134][EXAMPLE] Indent SparkSession builder patterns and update binary_classification_metrics_example.py
Date Thu, 05 May 2016 21:38:12 GMT
[SPARK-15134][EXAMPLE] Indent SparkSession builder patterns and update binary_classification_metrics_example.py

## What changes were proposed in this pull request?

This issue addresses the comments in SPARK-15031 and also fixes Java linter errors.
- Use multiline format in SparkSession builder patterns.
- Update `binary_classification_metrics_example.py` to use `SparkSession`.
- Fix Java linter errors (introduced in SPARK-13745, SPARK-15031, and earlier changes).
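The formatting convention above can be sketched with a toy fluent builder (illustration only; `SessionBuilder` here is a minimal stand-in, not the real pyspark API):

```python
# Illustration of the multiline builder style this patch standardizes on.
# SessionBuilder is a hypothetical stand-in for SparkSession.builder; any
# method that returns `self` supports this one-call-per-line chaining.

class SessionBuilder:
    def __init__(self):
        self._app_name = None

    def appName(self, name):
        self._app_name = name
        return self  # returning self enables method chaining

    def getOrCreate(self):
        # The real builder returns a session; a dict stands in here.
        return {"appName": self._app_name}


# Before: the whole chain on one line (easily exceeds line-length limits
# once config calls are added).
session_one_line = SessionBuilder().appName("Example").getOrCreate()

# After: one chained call per line, indented under the receiver, as in
# the diffs below.
session_multiline = (SessionBuilder()
    .appName("Example")
    .getOrCreate())
```

Both forms are equivalent at runtime; the multiline form simply keeps each configuration step on its own line so new options can be added without reflowing the statement.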

## How was this patch tested?

Passed the Jenkins tests and ran `dev/lint-java` manually.

Author: Dongjoon Hyun <dongjoon@apache.org>

Closes #12911 from dongjoon-hyun/SPARK-15134.

(cherry picked from commit 2c170dd3d731bd848d62265431795e1c141d75d7)
Signed-off-by: Andrew Or <andrew@databricks.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/8b4ab590
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/8b4ab590
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/8b4ab590

Branch: refs/heads/branch-2.0
Commit: 8b4ab590cb18b926c71c4cb4ec5b184b1b566770
Parents: e78b31b
Author: Dongjoon Hyun <dongjoon@apache.org>
Authored: Thu May 5 14:37:50 2016 -0700
Committer: Andrew Or <andrew@databricks.com>
Committed: Thu May 5 14:38:02 2016 -0700

----------------------------------------------------------------------
 .../network/shuffle/ExternalShuffleBlockHandler.java |  7 +++++--
 .../ml/JavaAFTSurvivalRegressionExample.java         |  5 ++++-
 .../org/apache/spark/examples/ml/JavaALSExample.java |  5 ++++-
 .../spark/examples/ml/JavaBinarizerExample.java      |  8 ++++----
 .../examples/ml/JavaBisectingKMeansExample.java      |  5 ++++-
 .../spark/examples/ml/JavaBucketizerExample.java     |  5 ++++-
 .../spark/examples/ml/JavaChiSqSelectorExample.java  |  8 ++++----
 .../examples/ml/JavaCountVectorizerExample.java      |  5 ++++-
 .../org/apache/spark/examples/ml/JavaDCTExample.java |  8 ++++----
 .../spark/examples/ml/JavaDeveloperApiExample.java   |  5 ++++-
 .../examples/ml/JavaElementwiseProductExample.java   |  7 +++----
 .../ml/JavaGradientBoostedTreeClassifierExample.java | 10 ++++++----
 .../spark/examples/ml/JavaIndexToStringExample.java  |  5 ++++-
 .../apache/spark/examples/ml/JavaKMeansExample.java  |  5 ++++-
 .../org/apache/spark/examples/ml/JavaLDAExample.java |  5 ++++-
 .../spark/examples/ml/JavaMaxAbsScalerExample.java   | 14 ++++++++++----
 .../spark/examples/ml/JavaMinMaxScalerExample.java   | 10 ++++++++--
 .../apache/spark/examples/ml/JavaNGramExample.java   |  5 ++++-
 .../spark/examples/ml/JavaNaiveBayesExample.java     |  5 ++++-
 .../spark/examples/ml/JavaNormalizerExample.java     |  5 ++++-
 .../spark/examples/ml/JavaOneHotEncoderExample.java  |  5 ++++-
 .../spark/examples/ml/JavaOneVsRestExample.java      |  5 ++++-
 .../org/apache/spark/examples/ml/JavaPCAExample.java |  5 ++++-
 .../spark/examples/ml/JavaPipelineExample.java       |  5 ++++-
 .../examples/ml/JavaPolynomialExpansionExample.java  |  5 ++++-
 .../spark/examples/ml/JavaRFormulaExample.java       |  5 ++++-
 .../spark/examples/ml/JavaSQLTransformerExample.java |  5 ++++-
 .../spark/examples/ml/JavaSimpleParamsExample.java   |  5 ++++-
 .../spark/examples/ml/JavaStandardScalerExample.java |  5 ++++-
 .../examples/ml/JavaStopWordsRemoverExample.java     |  5 ++++-
 .../spark/examples/ml/JavaStringIndexerExample.java  |  5 ++++-
 .../apache/spark/examples/ml/JavaTfIdfExample.java   |  5 ++++-
 .../spark/examples/ml/JavaTokenizerExample.java      |  5 ++++-
 .../examples/ml/JavaVectorAssemblerExample.java      |  5 ++++-
 .../spark/examples/ml/JavaVectorIndexerExample.java  |  5 ++++-
 .../spark/examples/ml/JavaVectorSlicerExample.java   |  5 ++++-
 .../spark/examples/ml/JavaWord2VecExample.java       |  5 ++++-
 .../org/apache/spark/examples/sql/JavaSparkSQL.java  |  8 ++++++--
 .../examples/streaming/JavaSqlNetworkWordCount.java  |  5 ++++-
 examples/src/main/python/ml/als_example.py           |  5 ++++-
 examples/src/main/python/ml/binarizer_example.py     |  5 ++++-
 .../src/main/python/ml/bisecting_k_means_example.py  |  5 ++++-
 examples/src/main/python/ml/bucketizer_example.py    |  5 ++++-
 .../src/main/python/ml/chisq_selector_example.py     |  5 ++++-
 .../src/main/python/ml/count_vectorizer_example.py   |  5 ++++-
 examples/src/main/python/ml/cross_validator.py       |  5 ++++-
 examples/src/main/python/ml/dataframe_example.py     |  5 ++++-
 examples/src/main/python/ml/dct_example.py           |  5 ++++-
 .../ml/decision_tree_classification_example.py       |  5 ++++-
 .../python/ml/decision_tree_regression_example.py    |  5 ++++-
 .../main/python/ml/elementwise_product_example.py    |  5 ++++-
 .../python/ml/estimator_transformer_param_example.py |  5 ++++-
 .../ml/gradient_boosted_tree_classifier_example.py   |  5 ++++-
 .../ml/gradient_boosted_tree_regressor_example.py    |  5 ++++-
 .../src/main/python/ml/index_to_string_example.py    |  5 ++++-
 examples/src/main/python/ml/kmeans_example.py        |  5 ++++-
 .../python/ml/linear_regression_with_elastic_net.py  |  5 ++++-
 .../ml/logistic_regression_with_elastic_net.py       |  5 ++++-
 .../src/main/python/ml/max_abs_scaler_example.py     |  5 ++++-
 .../src/main/python/ml/min_max_scaler_example.py     |  5 ++++-
 examples/src/main/python/ml/n_gram_example.py        |  5 ++++-
 examples/src/main/python/ml/naive_bayes_example.py   |  5 ++++-
 examples/src/main/python/ml/normalizer_example.py    |  5 ++++-
 .../src/main/python/ml/onehot_encoder_example.py     |  5 ++++-
 examples/src/main/python/ml/pca_example.py           |  5 ++++-
 examples/src/main/python/ml/pipeline_example.py      |  5 ++++-
 .../main/python/ml/polynomial_expansion_example.py   |  5 ++++-
 .../python/ml/random_forest_classifier_example.py    |  5 ++++-
 .../python/ml/random_forest_regressor_example.py     |  5 ++++-
 examples/src/main/python/ml/rformula_example.py      |  5 ++++-
 .../python/ml/simple_text_classification_pipeline.py |  5 ++++-
 examples/src/main/python/ml/sql_transformer.py       |  5 ++++-
 .../src/main/python/ml/standard_scaler_example.py    |  5 ++++-
 .../src/main/python/ml/stopwords_remover_example.py  |  5 ++++-
 .../src/main/python/ml/string_indexer_example.py     |  5 ++++-
 examples/src/main/python/ml/tf_idf_example.py        |  5 ++++-
 examples/src/main/python/ml/tokenizer_example.py     |  5 ++++-
 .../src/main/python/ml/train_validation_split.py     |  5 ++++-
 .../src/main/python/ml/vector_assembler_example.py   |  5 ++++-
 .../src/main/python/ml/vector_indexer_example.py     |  5 ++++-
 examples/src/main/python/ml/vector_slicer_example.py |  5 ++++-
 examples/src/main/python/ml/word2vec_example.py      |  5 ++++-
 .../mllib/binary_classification_metrics_example.py   | 15 ++++++++++-----
 examples/src/main/python/sql.py                      |  5 ++++-
 .../main/python/streaming/sql_network_wordcount.py   |  6 ++++--
 .../examples/ml/AFTSurvivalRegressionExample.scala   |  5 ++++-
 .../org/apache/spark/examples/ml/ALSExample.scala    |  5 ++++-
 .../apache/spark/examples/ml/BinarizerExample.scala  |  5 ++++-
 .../apache/spark/examples/ml/BucketizerExample.scala |  5 ++++-
 .../spark/examples/ml/ChiSqSelectorExample.scala     |  5 ++++-
 .../spark/examples/ml/CountVectorizerExample.scala   |  5 ++++-
 .../org/apache/spark/examples/ml/DCTExample.scala    |  5 ++++-
 .../apache/spark/examples/ml/DataFrameExample.scala  |  5 ++++-
 .../ml/DecisionTreeClassificationExample.scala       |  5 ++++-
 .../spark/examples/ml/DecisionTreeExample.scala      |  4 +++-
 .../examples/ml/DecisionTreeRegressionExample.scala  |  5 ++++-
 .../spark/examples/ml/DeveloperApiExample.scala      |  5 ++++-
 .../examples/ml/ElementwiseProductExample.scala      |  5 ++++-
 .../ml/EstimatorTransformerParamExample.scala        |  5 ++++-
 .../ml/GradientBoostedTreeClassifierExample.scala    |  5 ++++-
 .../ml/GradientBoostedTreeRegressorExample.scala     |  5 ++++-
 .../spark/examples/ml/IndexToStringExample.scala     |  5 ++++-
 .../org/apache/spark/examples/ml/KMeansExample.scala |  5 ++++-
 .../org/apache/spark/examples/ml/LDAExample.scala    |  5 ++++-
 .../ml/LinearRegressionWithElasticNetExample.scala   |  5 ++++-
 .../ml/LogisticRegressionSummaryExample.scala        |  5 ++++-
 .../spark/examples/ml/MaxAbsScalerExample.scala      |  5 ++++-
 .../spark/examples/ml/MinMaxScalerExample.scala      |  5 ++++-
 .../ml/MultilayerPerceptronClassifierExample.scala   |  5 ++++-
 .../org/apache/spark/examples/ml/NGramExample.scala  |  5 ++++-
 .../apache/spark/examples/ml/NaiveBayesExample.scala |  5 ++++-
 .../apache/spark/examples/ml/NormalizerExample.scala |  5 ++++-
 .../spark/examples/ml/OneHotEncoderExample.scala     |  5 ++++-
 .../apache/spark/examples/ml/OneVsRestExample.scala  |  5 ++++-
 .../org/apache/spark/examples/ml/PCAExample.scala    |  5 ++++-
 .../apache/spark/examples/ml/PipelineExample.scala   |  5 ++++-
 .../examples/ml/PolynomialExpansionExample.scala     |  5 ++++-
 .../examples/ml/QuantileDiscretizerExample.scala     |  5 ++++-
 .../apache/spark/examples/ml/RFormulaExample.scala   |  5 ++++-
 .../examples/ml/RandomForestClassifierExample.scala  |  5 ++++-
 .../examples/ml/RandomForestRegressorExample.scala   |  5 ++++-
 .../spark/examples/ml/SQLTransformerExample.scala    |  5 ++++-
 .../spark/examples/ml/SimpleParamsExample.scala      |  5 ++++-
 .../ml/SimpleTextClassificationPipeline.scala        |  5 ++++-
 .../spark/examples/ml/StandardScalerExample.scala    |  5 ++++-
 .../spark/examples/ml/StopWordsRemoverExample.scala  |  5 ++++-
 .../spark/examples/ml/StringIndexerExample.scala     |  5 ++++-
 .../org/apache/spark/examples/ml/TfIdfExample.scala  |  5 ++++-
 .../apache/spark/examples/ml/TokenizerExample.scala  |  5 ++++-
 .../spark/examples/ml/VectorAssemblerExample.scala   |  5 ++++-
 .../spark/examples/ml/VectorIndexerExample.scala     |  5 ++++-
 .../spark/examples/ml/VectorSlicerExample.scala      |  5 ++++-
 .../apache/spark/examples/ml/Word2VecExample.scala   |  5 ++++-
 .../org/apache/spark/examples/mllib/LDAExample.scala |  4 +++-
 .../spark/examples/mllib/RankingMetricsExample.scala |  5 ++++-
 .../examples/mllib/RegressionMetricsExample.scala    |  5 ++++-
 .../org/apache/spark/examples/sql/RDDRelation.scala  |  5 ++++-
 .../examples/streaming/SqlNetworkWordCount.scala     |  5 ++++-
 .../parquet/VectorizedPlainValuesReader.java         |  5 +++--
 .../execution/vectorized/OffHeapColumnVector.java    | 15 ++++++++-------
 .../sql/execution/vectorized/OnHeapColumnVector.java |  7 ++++---
 .../hive/service/cli/session/SessionManager.java     |  2 --
 142 files changed, 585 insertions(+), 178 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/ExternalShuffleBlockHandler.java
----------------------------------------------------------------------
diff --git a/common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/ExternalShuffleBlockHandler.java b/common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/ExternalShuffleBlockHandler.java
index fb1226c..22fd592 100644
--- a/common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/ExternalShuffleBlockHandler.java
+++ b/common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/ExternalShuffleBlockHandler.java
@@ -87,8 +87,11 @@ public class ExternalShuffleBlockHandler extends RpcHandler {
         blocks.add(blockManager.getBlockData(msg.appId, msg.execId, blockId));
       }
       long streamId = streamManager.registerStream(client.getClientId(), blocks.iterator());
-      logger.trace("Registered streamId {} with {} buffers for client {} from host {}", streamId, 
-        msg.blockIds.length, client.getClientId(), NettyUtils.getRemoteAddress(client.getChannel()));
+      logger.trace("Registered streamId {} with {} buffers for client {} from host {}",
+          streamId,
+          msg.blockIds.length,
+          client.getClientId(),
+          NettyUtils.getRemoteAddress(client.getChannel()));
       callback.onSuccess(new StreamHandle(streamId, msg.blockIds.length).toByteBuffer());
 
     } else if (msgObj instanceof RegisterExecutor) {

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaAFTSurvivalRegressionExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaAFTSurvivalRegressionExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaAFTSurvivalRegressionExample.java
index ecb7084..2c2aa6d 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaAFTSurvivalRegressionExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaAFTSurvivalRegressionExample.java
@@ -33,7 +33,10 @@ import org.apache.spark.sql.types.*;
 
 public class JavaAFTSurvivalRegressionExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaAFTSurvivalRegressionExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaAFTSurvivalRegressionExample")
+      .getOrCreate();
 
     // $example on$
     List<Row> data = Arrays.asList(

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaALSExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaALSExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaALSExample.java
index 9a9a104..4b13ba6 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaALSExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaALSExample.java
@@ -81,7 +81,10 @@ public class JavaALSExample {
   // $example off$
 
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaALSExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaALSExample")
+      .getOrCreate();
 
     // $example on$
     JavaRDD<Rating> ratingsRDD = spark

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaBinarizerExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaBinarizerExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaBinarizerExample.java
index 88e4298..5f964ac 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaBinarizerExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaBinarizerExample.java
@@ -17,8 +17,6 @@
 
 package org.apache.spark.examples.ml;
 
-import org.apache.spark.SparkConf;
-import org.apache.spark.api.java.JavaSparkContext;
 import org.apache.spark.sql.Dataset;
 import org.apache.spark.sql.SparkSession;
 
@@ -26,7 +24,6 @@ import org.apache.spark.sql.SparkSession;
 import java.util.Arrays;
 import java.util.List;
 
-import org.apache.spark.api.java.JavaRDD;
 import org.apache.spark.ml.feature.Binarizer;
 import org.apache.spark.sql.Row;
 import org.apache.spark.sql.RowFactory;
@@ -38,7 +35,10 @@ import org.apache.spark.sql.types.StructType;
 
 public class JavaBinarizerExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaBinarizerExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaBinarizerExample")
+      .getOrCreate();
 
     // $example on$
     List<Row> data = Arrays.asList(

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaBisectingKMeansExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaBisectingKMeansExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaBisectingKMeansExample.java
index 51aa350..810ad90 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaBisectingKMeansExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaBisectingKMeansExample.java
@@ -42,7 +42,10 @@ import org.apache.spark.sql.types.StructType;
 public class JavaBisectingKMeansExample {
 
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaBisectingKMeansExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaBisectingKMeansExample")
+      .getOrCreate();
 
     // $example on$
     List<Row> data = Arrays.asList(

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaBucketizerExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaBucketizerExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaBucketizerExample.java
index 0c24f52..691df38 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaBucketizerExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaBucketizerExample.java
@@ -35,7 +35,10 @@ import org.apache.spark.sql.types.StructType;
 
 public class JavaBucketizerExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaBucketizerExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaBucketizerExample")
+      .getOrCreate();
 
     // $example on$
     double[] splits = {Double.NEGATIVE_INFINITY, -0.5, 0.0, 0.5, Double.POSITIVE_INFINITY};

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaChiSqSelectorExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaChiSqSelectorExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaChiSqSelectorExample.java
index 684cf9a..f8f2fb1 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaChiSqSelectorExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaChiSqSelectorExample.java
@@ -17,9 +17,6 @@
 
 package org.apache.spark.examples.ml;
 
-import org.apache.spark.SparkConf;
-import org.apache.spark.api.java.JavaRDD;
-import org.apache.spark.api.java.JavaSparkContext;
 import org.apache.spark.sql.Dataset;
 import org.apache.spark.sql.SparkSession;
 
@@ -40,7 +37,10 @@ import org.apache.spark.sql.types.StructType;
 
 public class JavaChiSqSelectorExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaChiSqSelectorExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaChiSqSelectorExample")
+      .getOrCreate();
 
     // $example on$
     List<Row> data = Arrays.asList(

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaCountVectorizerExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaCountVectorizerExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaCountVectorizerExample.java
index 0631f9d..0a6b136 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaCountVectorizerExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaCountVectorizerExample.java
@@ -32,7 +32,10 @@ import org.apache.spark.sql.types.*;
 
 public class JavaCountVectorizerExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaCountVectorizerExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaCountVectorizerExample")
+      .getOrCreate();
 
     // $example on$
     // Input data: Each row is a bag of words from a sentence or document.

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaDCTExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaDCTExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaDCTExample.java
index ec57a24..eee92c7 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaDCTExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaDCTExample.java
@@ -17,8 +17,6 @@
 
 package org.apache.spark.examples.ml;
 
-import org.apache.spark.SparkConf;
-import org.apache.spark.api.java.JavaSparkContext;
 import org.apache.spark.sql.Dataset;
 import org.apache.spark.sql.SparkSession;
 
@@ -26,7 +24,6 @@ import org.apache.spark.sql.SparkSession;
 import java.util.Arrays;
 import java.util.List;
 
-import org.apache.spark.api.java.JavaRDD;
 import org.apache.spark.ml.feature.DCT;
 import org.apache.spark.mllib.linalg.VectorUDT;
 import org.apache.spark.mllib.linalg.Vectors;
@@ -39,7 +36,10 @@ import org.apache.spark.sql.types.StructType;
 
 public class JavaDCTExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaDCTExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaDCTExample")
+      .getOrCreate();
 
     // $example on$
     List<Row> data = Arrays.asList(

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaDeveloperApiExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaDeveloperApiExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaDeveloperApiExample.java
index 90023ac..49bad0a 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaDeveloperApiExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaDeveloperApiExample.java
@@ -49,7 +49,10 @@ import org.apache.spark.sql.SparkSession;
 public class JavaDeveloperApiExample {
 
   public static void main(String[] args) throws Exception {
-    SparkSession spark = SparkSession.builder().appName("JavaDeveloperApiExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaDeveloperApiExample")
+      .getOrCreate();
 
     // Prepare training data.
     List<LabeledPoint> localTraining = Lists.newArrayList(

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaElementwiseProductExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaElementwiseProductExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaElementwiseProductExample.java
index a062a6f..9126242 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaElementwiseProductExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaElementwiseProductExample.java
@@ -17,8 +17,6 @@
 
 package org.apache.spark.examples.ml;
 
-import org.apache.spark.SparkConf;
-import org.apache.spark.api.java.JavaSparkContext;
 import org.apache.spark.sql.Dataset;
 import org.apache.spark.sql.SparkSession;
 
@@ -27,7 +25,6 @@ import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
 
-import org.apache.spark.api.java.JavaRDD;
 import org.apache.spark.ml.feature.ElementwiseProduct;
 import org.apache.spark.mllib.linalg.Vector;
 import org.apache.spark.mllib.linalg.VectorUDT;
@@ -42,7 +39,9 @@ import org.apache.spark.sql.types.StructType;
 public class JavaElementwiseProductExample {
   public static void main(String[] args) {
     SparkSession spark = SparkSession
-      .builder().appName("JavaElementwiseProductExample").getOrCreate();
+      .builder()
+      .appName("JavaElementwiseProductExample")
+      .getOrCreate();
 
     // $example on$
     // Create some vector data; also works for sparse vectors

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaGradientBoostedTreeClassifierExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaGradientBoostedTreeClassifierExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaGradientBoostedTreeClassifierExample.java
index a7c89b9..baacd79 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaGradientBoostedTreeClassifierExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaGradientBoostedTreeClassifierExample.java
@@ -17,8 +17,6 @@
 
 package org.apache.spark.examples.ml;
 
-import org.apache.spark.SparkConf;
-import org.apache.spark.api.java.JavaSparkContext;
 // $example on$
 import org.apache.spark.ml.Pipeline;
 import org.apache.spark.ml.PipelineModel;
@@ -35,11 +33,15 @@ import org.apache.spark.sql.SparkSession;
 public class JavaGradientBoostedTreeClassifierExample {
   public static void main(String[] args) {
     SparkSession spark = SparkSession
-      .builder().appName("JavaGradientBoostedTreeClassifierExample").getOrCreate();
+      .builder()
+      .appName("JavaGradientBoostedTreeClassifierExample")
+      .getOrCreate();
 
     // $example on$
     // Load and parse the data file, converting it to a DataFrame.
-    Dataset<Row> data = spark.read().format("libsvm")
+    Dataset<Row> data = spark
+      .read()
+      .format("libsvm")
       .load("data/mllib/sample_libsvm_data.txt");
 
     // Index labels, adding metadata to the label column.

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaIndexToStringExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaIndexToStringExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaIndexToStringExample.java
index ccd74f2..0064beb 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaIndexToStringExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaIndexToStringExample.java
@@ -37,7 +37,10 @@ import org.apache.spark.sql.types.StructType;
 
 public class JavaIndexToStringExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaIndexToStringExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaIndexToStringExample")
+      .getOrCreate();
 
     // $example on$
     List<Row> data = Arrays.asList(

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaKMeansExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaKMeansExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaKMeansExample.java
index e6d82a0..65e29ad 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaKMeansExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaKMeansExample.java
@@ -70,7 +70,10 @@ public class JavaKMeansExample {
     int k = Integer.parseInt(args[1]);
 
     // Parses the arguments
-    SparkSession spark = SparkSession.builder().appName("JavaKMeansExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaKMeansExample")
+      .getOrCreate();
 
     // $example on$
     // Loads data

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaLDAExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaLDAExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaLDAExample.java
index b8baca5..1c52f37 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaLDAExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaLDAExample.java
@@ -65,7 +65,10 @@ public class JavaLDAExample {
     String inputFile = "data/mllib/sample_lda_data.txt";
 
     // Parses the arguments
-    SparkSession spark = SparkSession.builder().appName("JavaLDAExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaLDAExample")
+      .getOrCreate();
 
     // Loads data
     JavaRDD<Row> points = spark.read().text(inputFile).javaRDD().map(new ParseVector());

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaMaxAbsScalerExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaMaxAbsScalerExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaMaxAbsScalerExample.java
index 80cdd36..9a27b0e 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaMaxAbsScalerExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaMaxAbsScalerExample.java
@@ -28,13 +28,19 @@ import org.apache.spark.sql.SparkSession;
 public class JavaMaxAbsScalerExample {
 
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaMaxAbsScalerExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaMaxAbsScalerExample")
+      .getOrCreate();
 
     // $example on$
-    Dataset<Row> dataFrame = spark.read().format("libsvm").load("data/mllib/sample_libsvm_data.txt");
+    Dataset<Row> dataFrame = spark
+      .read()
+      .format("libsvm")
+      .load("data/mllib/sample_libsvm_data.txt");
     MaxAbsScaler scaler = new MaxAbsScaler()
-        .setInputCol("features")
-        .setOutputCol("scaledFeatures");
+      .setInputCol("features")
+      .setOutputCol("scaledFeatures");
 
     // Compute summary statistics and generate MaxAbsScalerModel
     MaxAbsScalerModel scalerModel = scaler.fit(dataFrame);

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaMinMaxScalerExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaMinMaxScalerExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaMinMaxScalerExample.java
index 022940f..37fa1c5 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaMinMaxScalerExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaMinMaxScalerExample.java
@@ -28,10 +28,16 @@ import org.apache.spark.sql.Row;
 
 public class JavaMinMaxScalerExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaMinMaxScalerExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaMinMaxScalerExample")
+      .getOrCreate();
 
     // $example on$
-    Dataset<Row> dataFrame = spark.read().format("libsvm").load("data/mllib/sample_libsvm_data.txt");
+    Dataset<Row> dataFrame = spark
+      .read()
+      .format("libsvm")
+      .load("data/mllib/sample_libsvm_data.txt");
     MinMaxScaler scaler = new MinMaxScaler()
       .setInputCol("features")
       .setOutputCol("scaledFeatures");

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaNGramExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaNGramExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaNGramExample.java
index 325b7b5..899815f 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaNGramExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaNGramExample.java
@@ -35,7 +35,10 @@ import org.apache.spark.sql.types.StructType;
 
 public class JavaNGramExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaNGramExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaNGramExample")
+      .getOrCreate();
 
     // $example on$
     List<Row> data = Arrays.asList(

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaNaiveBayesExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaNaiveBayesExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaNaiveBayesExample.java
index 1f24a23..50a46a5 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaNaiveBayesExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaNaiveBayesExample.java
@@ -32,7 +32,10 @@ import org.apache.spark.sql.SparkSession;
 public class JavaNaiveBayesExample {
 
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaNaiveBayesExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaNaiveBayesExample")
+      .getOrCreate();
 
     // $example on$
     // Load training data

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaNormalizerExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaNormalizerExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaNormalizerExample.java
index 4b3a718..abc38f8 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaNormalizerExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaNormalizerExample.java
@@ -27,7 +27,10 @@ import org.apache.spark.sql.Row;
 
 public class JavaNormalizerExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaNormalizerExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaNormalizerExample")
+      .getOrCreate();
 
     // $example on$
     Dataset<Row> dataFrame =

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaOneHotEncoderExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaOneHotEncoderExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaOneHotEncoderExample.java
index d6e4d21..5d29e54 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaOneHotEncoderExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaOneHotEncoderExample.java
@@ -37,7 +37,10 @@ import org.apache.spark.sql.types.StructType;
 
 public class JavaOneHotEncoderExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaOneHotEncoderExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaOneHotEncoderExample")
+      .getOrCreate();
 
     // $example on$
     List<Row> data = Arrays.asList(

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaOneVsRestExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaOneVsRestExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaOneVsRestExample.java
index 9cc983b..e0cb752 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaOneVsRestExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaOneVsRestExample.java
@@ -58,7 +58,10 @@ public class JavaOneVsRestExample {
   public static void main(String[] args) {
     // parse the arguments
     Params params = parse(args);
-    SparkSession spark = SparkSession.builder().appName("JavaOneVsRestExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaOneVsRestExample")
+      .getOrCreate();
 
     // $example on$
     // configure the base classifier

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaPCAExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaPCAExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaPCAExample.java
index 6b1dcb6..ffa979e 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaPCAExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaPCAExample.java
@@ -37,7 +37,10 @@ import org.apache.spark.sql.types.StructType;
 
 public class JavaPCAExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaPCAExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaPCAExample")
+      .getOrCreate();
 
     // $example on$
     List<Row> data = Arrays.asList(

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaPipelineExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaPipelineExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaPipelineExample.java
index 556a457..9a43189 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaPipelineExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaPipelineExample.java
@@ -36,7 +36,10 @@ import org.apache.spark.sql.SparkSession;
  */
 public class JavaPipelineExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaPipelineExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaPipelineExample")
+      .getOrCreate();
 
     // $example on$
     // Prepare training documents, which are labeled.

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaPolynomialExpansionExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaPolynomialExpansionExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaPolynomialExpansionExample.java
index e328454..7afcd0e 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaPolynomialExpansionExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaPolynomialExpansionExample.java
@@ -36,7 +36,10 @@ import org.apache.spark.sql.types.StructType;
 
 public class JavaPolynomialExpansionExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaPolynomialExpansionExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaPolynomialExpansionExample")
+      .getOrCreate();
 
     // $example on$
     PolynomialExpansion polyExpansion = new PolynomialExpansion()

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaRFormulaExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaRFormulaExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaRFormulaExample.java
index 8282ce0..428067e 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaRFormulaExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaRFormulaExample.java
@@ -35,7 +35,10 @@ import static org.apache.spark.sql.types.DataTypes.*;
 
 public class JavaRFormulaExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaRFormulaExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaRFormulaExample")
+      .getOrCreate();
 
     // $example on$
     StructType schema = createStructType(new StructField[]{

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaSQLTransformerExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaSQLTransformerExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaSQLTransformerExample.java
index 492718b..2a3d62d 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaSQLTransformerExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaSQLTransformerExample.java
@@ -31,7 +31,10 @@ import org.apache.spark.sql.types.*;
 
 public class JavaSQLTransformerExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaSQLTransformerExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaSQLTransformerExample")
+      .getOrCreate();
 
     // $example on$
     List<Row> data = Arrays.asList(

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaSimpleParamsExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaSimpleParamsExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaSimpleParamsExample.java
index f906843..0787079 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaSimpleParamsExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaSimpleParamsExample.java
@@ -40,7 +40,10 @@ import org.apache.spark.sql.SparkSession;
 public class JavaSimpleParamsExample {
 
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaSimpleParamsExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaSimpleParamsExample")
+      .getOrCreate();
 
     // Prepare training data.
     // We use LabeledPoint, which is a JavaBean.  Spark SQL can convert RDDs of JavaBeans

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaStandardScalerExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaStandardScalerExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaStandardScalerExample.java
index 10f82f2..08ea285 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaStandardScalerExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaStandardScalerExample.java
@@ -28,7 +28,10 @@ import org.apache.spark.sql.Row;
 
 public class JavaStandardScalerExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaStandardScalerExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaStandardScalerExample")
+      .getOrCreate();
 
     // $example on$
     Dataset<Row> dataFrame =

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaStopWordsRemoverExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaStopWordsRemoverExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaStopWordsRemoverExample.java
index 23ed071..def5994 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaStopWordsRemoverExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaStopWordsRemoverExample.java
@@ -36,7 +36,10 @@ import org.apache.spark.sql.types.StructType;
 public class JavaStopWordsRemoverExample {
 
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaStopWordsRemoverExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaStopWordsRemoverExample")
+      .getOrCreate();
 
     // $example on$
     StopWordsRemover remover = new StopWordsRemover()

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaStringIndexerExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaStringIndexerExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaStringIndexerExample.java
index d4c2cf9..7533c18 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaStringIndexerExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaStringIndexerExample.java
@@ -35,7 +35,10 @@ import static org.apache.spark.sql.types.DataTypes.*;
 
 public class JavaStringIndexerExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaStringIndexerExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaStringIndexerExample")
+      .getOrCreate();
 
     // $example on$
     List<Row> data = Arrays.asList(

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaTfIdfExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaTfIdfExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaTfIdfExample.java
index a816991..6e07539 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaTfIdfExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaTfIdfExample.java
@@ -38,7 +38,10 @@ import org.apache.spark.sql.types.StructType;
 
 public class JavaTfIdfExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaTfIdfExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaTfIdfExample")
+      .getOrCreate();
 
     // $example on$
     List<Row> data = Arrays.asList(

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaTokenizerExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaTokenizerExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaTokenizerExample.java
index a65735a..1cc16bb 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaTokenizerExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaTokenizerExample.java
@@ -36,7 +36,10 @@ import org.apache.spark.sql.types.StructType;
 
 public class JavaTokenizerExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaTokenizerExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaTokenizerExample")
+      .getOrCreate();
 
     // $example on$
     List<Row> data = Arrays.asList(

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaVectorAssemblerExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaVectorAssemblerExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaVectorAssemblerExample.java
index 9569bc2..41f1d87 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaVectorAssemblerExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaVectorAssemblerExample.java
@@ -35,7 +35,10 @@ import static org.apache.spark.sql.types.DataTypes.*;
 
 public class JavaVectorAssemblerExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaVectorAssemblerExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaVectorAssemblerExample")
+      .getOrCreate();
 
     // $example on$
     StructType schema = createStructType(new StructField[]{

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaVectorIndexerExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaVectorIndexerExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaVectorIndexerExample.java
index 217d5a0..dd9d757 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaVectorIndexerExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaVectorIndexerExample.java
@@ -30,7 +30,10 @@ import org.apache.spark.sql.Row;
 
 public class JavaVectorIndexerExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaVectorIndexerExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaVectorIndexerExample")
+      .getOrCreate();
 
     // $example on$
     Dataset<Row> data = spark.read().format("libsvm").load("data/mllib/sample_libsvm_data.txt");

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaVectorSlicerExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaVectorSlicerExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaVectorSlicerExample.java
index 4f1ea82..24959c0 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaVectorSlicerExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaVectorSlicerExample.java
@@ -37,7 +37,10 @@ import org.apache.spark.sql.types.*;
 
 public class JavaVectorSlicerExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaVectorSlicerExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaVectorSlicerExample")
+      .getOrCreate();
 
     // $example on$
     Attribute[] attrs = new Attribute[]{

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/ml/JavaWord2VecExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaWord2VecExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaWord2VecExample.java
index d9b1a79..9be6e63 100644
--- a/examples/src/main/java/org/apache/spark/examples/ml/JavaWord2VecExample.java
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaWord2VecExample.java
@@ -32,7 +32,10 @@ import org.apache.spark.sql.types.*;
 
 public class JavaWord2VecExample {
   public static void main(String[] args) {
-    SparkSession spark = SparkSession.builder().appName("JavaWord2VecExample").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaWord2VecExample")
+      .getOrCreate();
 
     // $example on$
     // Input data: Each row is a bag of words from a sentence or document.

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/sql/JavaSparkSQL.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/sql/JavaSparkSQL.java b/examples/src/main/java/org/apache/spark/examples/sql/JavaSparkSQL.java
index ec2142e..755b4f5 100644
--- a/examples/src/main/java/org/apache/spark/examples/sql/JavaSparkSQL.java
+++ b/examples/src/main/java/org/apache/spark/examples/sql/JavaSparkSQL.java
@@ -51,7 +51,10 @@ public class JavaSparkSQL {
   }
 
   public static void main(String[] args) throws Exception {
-    SparkSession spark = SparkSession.builder().appName("JavaSparkSQL").getOrCreate();
+    SparkSession spark = SparkSession
+      .builder()
+      .appName("JavaSparkSQL")
+      .getOrCreate();
 
     System.out.println("=== Data source: RDD ===");
     // Load a text file and convert each line to a Java Bean.
@@ -147,7 +150,8 @@ public class JavaSparkSQL {
     // a RDD[String] storing one JSON object per string.
     List<String> jsonData = Arrays.asList(
           "{\"name\":\"Yin\",\"address\":{\"city\":\"Columbus\",\"state\":\"Ohio\"}}");
-    JavaRDD<String> anotherPeopleRDD = spark.createDataFrame(jsonData, String.class).toJSON().javaRDD();
+    JavaRDD<String> anotherPeopleRDD = spark
+      .createDataFrame(jsonData, String.class).toJSON().javaRDD();
     Dataset<Row> peopleFromJsonRDD = spark.read().json(anotherPeopleRDD);
 
     // Take a look at the schema of this new DataFrame.

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/java/org/apache/spark/examples/streaming/JavaSqlNetworkWordCount.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/streaming/JavaSqlNetworkWordCount.java b/examples/src/main/java/org/apache/spark/examples/streaming/JavaSqlNetworkWordCount.java
index 44f1e80..57953ef 100644
--- a/examples/src/main/java/org/apache/spark/examples/streaming/JavaSqlNetworkWordCount.java
+++ b/examples/src/main/java/org/apache/spark/examples/streaming/JavaSqlNetworkWordCount.java
@@ -115,7 +115,10 @@ class JavaSparkSessionSingleton {
   private static transient SparkSession instance = null;
   public static SparkSession getInstance(SparkConf sparkConf) {
     if (instance == null) {
-      instance = SparkSession.builder().config(sparkConf).getOrCreate();
+      instance = SparkSession
+        .builder()
+        .config(sparkConf)
+        .getOrCreate();
     }
     return instance;
   }
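The hunk above reformats a lazily-initialized singleton: `getInstance` builds the `SparkSession` only on the first call and returns the cached instance afterwards. A minimal sketch of the same pattern in Python (using a plain dict as a stand-in for the real `SparkConf`/`SparkSession`, not pyspark, so it stays self-contained):

```python
# Sketch of the lazy-singleton pattern used by JavaSparkSessionSingleton
# above. The dict below is a hypothetical stand-in for
# builder().config(sparkConf).getOrCreate(); only the caching logic is
# the point.

_instance = None

def get_instance(conf):
    """Create the 'session' on first call; reuse it on every later call."""
    global _instance
    if _instance is None:
        _instance = {"conf": conf}  # built once, from the first conf seen
    return _instance
```

Note that, as in the Java original, a config passed on a later call is ignored once the instance exists; callers are expected to pass the same conf every time.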

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/als_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/als_example.py b/examples/src/main/python/ml/als_example.py
index e36444f..ff0829b 100644
--- a/examples/src/main/python/ml/als_example.py
+++ b/examples/src/main/python/ml/als_example.py
@@ -30,7 +30,10 @@ from pyspark.sql import Row
 # $example off$
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("ALSExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("ALSExample")\
+        .getOrCreate()
 
     # $example on$
     lines = spark.read.text("data/mllib/als/sample_movielens_ratings.txt").rdd
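The Python hunks in this patch all apply the same transformation: the one-line builder chain is split across lines with trailing backslashes. The style parses because each backslash joins the next physical line, so the chain remains a single expression, and because each builder method returns the builder itself. A standalone sketch (using a hypothetical stand-in class rather than pyspark, so it runs without Spark):

```python
# Stand-in classes illustrating the multiline fluent-builder style from
# the hunks above; SparkSessionStandIn is hypothetical, not the real API.

class Builder:
    def __init__(self):
        self._name = None

    def appName(self, name):
        self._name = name
        return self  # returning self is what makes the chaining work

    def getOrCreate(self):
        return {"appName": self._name}

class SparkSessionStandIn:
    builder = Builder()

# The same layout the patch introduces: one expression, line-continued.
spark = SparkSessionStandIn\
    .builder\
    .appName("ALSExample")\
    .getOrCreate()

print(spark["appName"])  # ALSExample
```

An equivalent alternative in Python is to wrap the whole chain in parentheses, which allows the same one-call-per-line layout without trailing backslashes.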

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/binarizer_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/binarizer_example.py b/examples/src/main/python/ml/binarizer_example.py
index 072187e..4224a27 100644
--- a/examples/src/main/python/ml/binarizer_example.py
+++ b/examples/src/main/python/ml/binarizer_example.py
@@ -23,7 +23,10 @@ from pyspark.ml.feature import Binarizer
 # $example off$
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("BinarizerExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("BinarizerExample")\
+        .getOrCreate()
 
     # $example on$
     continuousDataFrame = spark.createDataFrame([

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/bisecting_k_means_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/bisecting_k_means_example.py b/examples/src/main/python/ml/bisecting_k_means_example.py
index 836a89c..540a4bc 100644
--- a/examples/src/main/python/ml/bisecting_k_means_example.py
+++ b/examples/src/main/python/ml/bisecting_k_means_example.py
@@ -30,7 +30,10 @@ A simple example demonstrating a bisecting k-means clustering.
 """
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("PythonBisectingKMeansExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("PythonBisectingKMeansExample")\
+        .getOrCreate()
 
     # $example on$
     data = spark.read.text("data/mllib/kmeans_data.txt").rdd

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/bucketizer_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/bucketizer_example.py b/examples/src/main/python/ml/bucketizer_example.py
index 288ec62..8177e56 100644
--- a/examples/src/main/python/ml/bucketizer_example.py
+++ b/examples/src/main/python/ml/bucketizer_example.py
@@ -23,7 +23,10 @@ from pyspark.ml.feature import Bucketizer
 # $example off$
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("BucketizerExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("BucketizerExample")\
+        .getOrCreate()
 
     # $example on$
     splits = [-float("inf"), -0.5, 0.0, 0.5, float("inf")]

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/chisq_selector_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/chisq_selector_example.py b/examples/src/main/python/ml/chisq_selector_example.py
index 8f58fc2..8bafb94 100644
--- a/examples/src/main/python/ml/chisq_selector_example.py
+++ b/examples/src/main/python/ml/chisq_selector_example.py
@@ -24,7 +24,10 @@ from pyspark.mllib.linalg import Vectors
 # $example off$
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("ChiSqSelectorExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("ChiSqSelectorExample")\
+        .getOrCreate()
 
     # $example on$
     df = spark.createDataFrame([

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/count_vectorizer_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/count_vectorizer_example.py b/examples/src/main/python/ml/count_vectorizer_example.py
index 9dbf995..38cfac8 100644
--- a/examples/src/main/python/ml/count_vectorizer_example.py
+++ b/examples/src/main/python/ml/count_vectorizer_example.py
@@ -23,7 +23,10 @@ from pyspark.ml.feature import CountVectorizer
 # $example off$
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("CountVectorizerExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("CountVectorizerExample")\
+        .getOrCreate()
 
     # $example on$
     # Input data: Each row is a bag of words with a ID.

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/cross_validator.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/cross_validator.py b/examples/src/main/python/ml/cross_validator.py
index a61d0f6..a41df6c 100644
--- a/examples/src/main/python/ml/cross_validator.py
+++ b/examples/src/main/python/ml/cross_validator.py
@@ -35,7 +35,10 @@ Run with:
 """
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("CrossValidatorExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("CrossValidatorExample")\
+        .getOrCreate()
     # $example on$
     # Prepare training documents, which are labeled.
     training = spark.createDataFrame([

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/dataframe_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/dataframe_example.py b/examples/src/main/python/ml/dataframe_example.py
index b3e6710..a7d8b90 100644
--- a/examples/src/main/python/ml/dataframe_example.py
+++ b/examples/src/main/python/ml/dataframe_example.py
@@ -33,7 +33,10 @@ if __name__ == "__main__":
     if len(sys.argv) > 2:
         print("Usage: dataframe_example.py <libsvm file>", file=sys.stderr)
         exit(-1)
-    spark = SparkSession.builder.appName("DataFrameExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("DataFrameExample")\
+        .getOrCreate()
     if len(sys.argv) == 2:
         input = sys.argv[1]
     else:

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/dct_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/dct_example.py b/examples/src/main/python/ml/dct_example.py
index 1bf8fc6..e36fcde 100644
--- a/examples/src/main/python/ml/dct_example.py
+++ b/examples/src/main/python/ml/dct_example.py
@@ -24,7 +24,10 @@ from pyspark.mllib.linalg import Vectors
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("DCTExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("DCTExample")\
+        .getOrCreate()
 
     # $example on$
     df = spark.createDataFrame([

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/decision_tree_classification_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/decision_tree_classification_example.py b/examples/src/main/python/ml/decision_tree_classification_example.py
index d2318e2..9b40b70 100644
--- a/examples/src/main/python/ml/decision_tree_classification_example.py
+++ b/examples/src/main/python/ml/decision_tree_classification_example.py
@@ -29,7 +29,10 @@ from pyspark.ml.evaluation import MulticlassClassificationEvaluator
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("decision_tree_classification_example").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("decision_tree_classification_example")\
+        .getOrCreate()
 
     # $example on$
     # Load the data stored in LIBSVM format as a DataFrame.

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/decision_tree_regression_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/decision_tree_regression_example.py b/examples/src/main/python/ml/decision_tree_regression_example.py
index 9e8cb38..b734d49 100644
--- a/examples/src/main/python/ml/decision_tree_regression_example.py
+++ b/examples/src/main/python/ml/decision_tree_regression_example.py
@@ -29,7 +29,10 @@ from pyspark.ml.evaluation import RegressionEvaluator
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("decision_tree_classification_example").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("decision_tree_classification_example")\
+        .getOrCreate()
 
     # $example on$
     # Load the data stored in LIBSVM format as a DataFrame.

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/elementwise_product_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/elementwise_product_example.py b/examples/src/main/python/ml/elementwise_product_example.py
index 6fa641b..41727ed 100644
--- a/examples/src/main/python/ml/elementwise_product_example.py
+++ b/examples/src/main/python/ml/elementwise_product_example.py
@@ -24,7 +24,10 @@ from pyspark.mllib.linalg import Vectors
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("ElementwiseProductExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("ElementwiseProductExample")\
+        .getOrCreate()
 
     # $example on$
     data = [(Vectors.dense([1.0, 2.0, 3.0]),), (Vectors.dense([4.0, 5.0, 6.0]),)]

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/estimator_transformer_param_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/estimator_transformer_param_example.py b/examples/src/main/python/ml/estimator_transformer_param_example.py
index 4993b5a..0fcae0e 100644
--- a/examples/src/main/python/ml/estimator_transformer_param_example.py
+++ b/examples/src/main/python/ml/estimator_transformer_param_example.py
@@ -26,7 +26,10 @@ from pyspark.ml.classification import LogisticRegression
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("EstimatorTransformerParamExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("EstimatorTransformerParamExample")\
+        .getOrCreate()
 
     # $example on$
     # Prepare training data from a list of (label, features) tuples.

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/gradient_boosted_tree_classifier_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/gradient_boosted_tree_classifier_example.py b/examples/src/main/python/ml/gradient_boosted_tree_classifier_example.py
index b09ad41..50026d7 100644
--- a/examples/src/main/python/ml/gradient_boosted_tree_classifier_example.py
+++ b/examples/src/main/python/ml/gradient_boosted_tree_classifier_example.py
@@ -29,7 +29,10 @@ from pyspark.ml.evaluation import MulticlassClassificationEvaluator
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("gradient_boosted_tree_classifier_example").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("gradient_boosted_tree_classifier_example")\
+        .getOrCreate()
 
     # $example on$
     # Load and parse the data file, converting it to a DataFrame.

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/gradient_boosted_tree_regressor_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/gradient_boosted_tree_regressor_example.py b/examples/src/main/python/ml/gradient_boosted_tree_regressor_example.py
index caa7cfc..5dd2272 100644
--- a/examples/src/main/python/ml/gradient_boosted_tree_regressor_example.py
+++ b/examples/src/main/python/ml/gradient_boosted_tree_regressor_example.py
@@ -29,7 +29,10 @@ from pyspark.ml.evaluation import RegressionEvaluator
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("gradient_boosted_tree_regressor_example").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("gradient_boosted_tree_regressor_example")\
+        .getOrCreate()
 
     # $example on$
     # Load and parse the data file, converting it to a DataFrame.

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/index_to_string_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/index_to_string_example.py b/examples/src/main/python/ml/index_to_string_example.py
index dd04b2c..523caac 100644
--- a/examples/src/main/python/ml/index_to_string_example.py
+++ b/examples/src/main/python/ml/index_to_string_example.py
@@ -23,7 +23,10 @@ from pyspark.ml.feature import IndexToString, StringIndexer
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("IndexToStringExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("IndexToStringExample")\
+        .getOrCreate()
 
     # $example on$
     df = spark.createDataFrame(

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/kmeans_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/kmeans_example.py b/examples/src/main/python/ml/kmeans_example.py
index 7d9d80e..7382396 100644
--- a/examples/src/main/python/ml/kmeans_example.py
+++ b/examples/src/main/python/ml/kmeans_example.py
@@ -49,7 +49,10 @@ if __name__ == "__main__":
     path = sys.argv[1]
     k = sys.argv[2]
 
-    spark = SparkSession.builder.appName("PythonKMeansExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("PythonKMeansExample")\
+        .getOrCreate()
 
     lines = spark.read.text(path).rdd
     data = lines.map(parseVector)

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/linear_regression_with_elastic_net.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/linear_regression_with_elastic_net.py b/examples/src/main/python/ml/linear_regression_with_elastic_net.py
index 99b7f7f..620ab5b 100644
--- a/examples/src/main/python/ml/linear_regression_with_elastic_net.py
+++ b/examples/src/main/python/ml/linear_regression_with_elastic_net.py
@@ -23,7 +23,10 @@ from pyspark.ml.regression import LinearRegression
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("LinearRegressionWithElasticNet").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("LinearRegressionWithElasticNet")\
+        .getOrCreate()
 
     # $example on$
     # Load training data

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/logistic_regression_with_elastic_net.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/logistic_regression_with_elastic_net.py b/examples/src/main/python/ml/logistic_regression_with_elastic_net.py
index 0d7112e..33d0689 100644
--- a/examples/src/main/python/ml/logistic_regression_with_elastic_net.py
+++ b/examples/src/main/python/ml/logistic_regression_with_elastic_net.py
@@ -23,7 +23,10 @@ from pyspark.ml.classification import LogisticRegression
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("LogisticRegressionWithElasticNet").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("LogisticRegressionWithElasticNet")\
+        .getOrCreate()
 
     # $example on$
     # Load training data

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/max_abs_scaler_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/max_abs_scaler_example.py b/examples/src/main/python/ml/max_abs_scaler_example.py
index 1cb95a9..ab91198 100644
--- a/examples/src/main/python/ml/max_abs_scaler_example.py
+++ b/examples/src/main/python/ml/max_abs_scaler_example.py
@@ -23,7 +23,10 @@ from pyspark.ml.feature import MaxAbsScaler
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("MaxAbsScalerExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("MaxAbsScalerExample")\
+        .getOrCreate()
 
     # $example on$
     dataFrame = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/min_max_scaler_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/min_max_scaler_example.py b/examples/src/main/python/ml/min_max_scaler_example.py
index 8d91a59..e3e7bc2 100644
--- a/examples/src/main/python/ml/min_max_scaler_example.py
+++ b/examples/src/main/python/ml/min_max_scaler_example.py
@@ -23,7 +23,10 @@ from pyspark.ml.feature import MinMaxScaler
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("MinMaxScalerExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("MinMaxScalerExample")\
+        .getOrCreate()
 
     # $example on$
     dataFrame = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/n_gram_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/n_gram_example.py b/examples/src/main/python/ml/n_gram_example.py
index b7fecf0..9ac07f2 100644
--- a/examples/src/main/python/ml/n_gram_example.py
+++ b/examples/src/main/python/ml/n_gram_example.py
@@ -23,7 +23,10 @@ from pyspark.ml.feature import NGram
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("NGramExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("NGramExample")\
+        .getOrCreate()
 
     # $example on$
     wordDataFrame = spark.createDataFrame([

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/naive_bayes_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/naive_bayes_example.py b/examples/src/main/python/ml/naive_bayes_example.py
index e370355..89255a2 100644
--- a/examples/src/main/python/ml/naive_bayes_example.py
+++ b/examples/src/main/python/ml/naive_bayes_example.py
@@ -24,7 +24,10 @@ from pyspark.ml.evaluation import MulticlassClassificationEvaluator
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("naive_bayes_example").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("naive_bayes_example")\
+        .getOrCreate()
 
     # $example on$
     # Load training data

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/normalizer_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/normalizer_example.py b/examples/src/main/python/ml/normalizer_example.py
index ae25537..19012f5 100644
--- a/examples/src/main/python/ml/normalizer_example.py
+++ b/examples/src/main/python/ml/normalizer_example.py
@@ -23,7 +23,10 @@ from pyspark.ml.feature import Normalizer
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("NormalizerExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("NormalizerExample")\
+        .getOrCreate()
 
     # $example on$
     dataFrame = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/onehot_encoder_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/onehot_encoder_example.py b/examples/src/main/python/ml/onehot_encoder_example.py
index 9acc363..b9fceef 100644
--- a/examples/src/main/python/ml/onehot_encoder_example.py
+++ b/examples/src/main/python/ml/onehot_encoder_example.py
@@ -23,7 +23,10 @@ from pyspark.ml.feature import OneHotEncoder, StringIndexer
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("OneHotEncoderExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("OneHotEncoderExample")\
+        .getOrCreate()
 
     # $example on$
     df = spark.createDataFrame([

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/pca_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/pca_example.py b/examples/src/main/python/ml/pca_example.py
index adab151..f1b3cde 100644
--- a/examples/src/main/python/ml/pca_example.py
+++ b/examples/src/main/python/ml/pca_example.py
@@ -24,7 +24,10 @@ from pyspark.mllib.linalg import Vectors
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("PCAExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("PCAExample")\
+        .getOrCreate()
 
     # $example on$
     data = [(Vectors.sparse(5, [(1, 1.0), (3, 7.0)]),),

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/pipeline_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/pipeline_example.py b/examples/src/main/python/ml/pipeline_example.py
index ed9765d..bd10cfd 100644
--- a/examples/src/main/python/ml/pipeline_example.py
+++ b/examples/src/main/python/ml/pipeline_example.py
@@ -27,7 +27,10 @@ from pyspark.ml.feature import HashingTF, Tokenizer
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("PipelineExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("PipelineExample")\
+        .getOrCreate()
 
     # $example on$
     # Prepare training documents from a list of (id, text, label) tuples.

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/polynomial_expansion_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/polynomial_expansion_example.py b/examples/src/main/python/ml/polynomial_expansion_example.py
index 328b559..08882bc 100644
--- a/examples/src/main/python/ml/polynomial_expansion_example.py
+++ b/examples/src/main/python/ml/polynomial_expansion_example.py
@@ -24,7 +24,10 @@ from pyspark.mllib.linalg import Vectors
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("PolynomialExpansionExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("PolynomialExpansionExample")\
+        .getOrCreate()
 
     # $example on$
     df = spark\

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/random_forest_classifier_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/random_forest_classifier_example.py b/examples/src/main/python/ml/random_forest_classifier_example.py
index b0a93e0..c618eaf 100644
--- a/examples/src/main/python/ml/random_forest_classifier_example.py
+++ b/examples/src/main/python/ml/random_forest_classifier_example.py
@@ -29,7 +29,10 @@ from pyspark.ml.evaluation import MulticlassClassificationEvaluator
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("random_forest_classifier_example").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("random_forest_classifier_example")\
+        .getOrCreate()
 
     # $example on$
     # Load and parse the data file, converting it to a DataFrame.

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/random_forest_regressor_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/random_forest_regressor_example.py b/examples/src/main/python/ml/random_forest_regressor_example.py
index 4bb84f0..3a79373 100644
--- a/examples/src/main/python/ml/random_forest_regressor_example.py
+++ b/examples/src/main/python/ml/random_forest_regressor_example.py
@@ -29,7 +29,10 @@ from pyspark.ml.evaluation import RegressionEvaluator
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("random_forest_regressor_example").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("random_forest_regressor_example")\
+        .getOrCreate()
 
     # $example on$
     # Load and parse the data file, converting it to a DataFrame.

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/rformula_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/rformula_example.py b/examples/src/main/python/ml/rformula_example.py
index 45cc116..d5df3ce 100644
--- a/examples/src/main/python/ml/rformula_example.py
+++ b/examples/src/main/python/ml/rformula_example.py
@@ -23,7 +23,10 @@ from pyspark.ml.feature import RFormula
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("RFormulaExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("RFormulaExample")\
+        .getOrCreate()
 
     # $example on$
     dataset = spark.createDataFrame(

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/simple_text_classification_pipeline.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/simple_text_classification_pipeline.py b/examples/src/main/python/ml/simple_text_classification_pipeline.py
index 3600c12..886f43c 100644
--- a/examples/src/main/python/ml/simple_text_classification_pipeline.py
+++ b/examples/src/main/python/ml/simple_text_classification_pipeline.py
@@ -33,7 +33,10 @@ pipeline in Python. Run with:
 
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("SimpleTextClassificationPipeline").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("SimpleTextClassificationPipeline")\
+        .getOrCreate()
 
     # Prepare training documents, which are labeled.
     training = spark.createDataFrame([

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/sql_transformer.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/sql_transformer.py b/examples/src/main/python/ml/sql_transformer.py
index 26045db..0bf8f35 100644
--- a/examples/src/main/python/ml/sql_transformer.py
+++ b/examples/src/main/python/ml/sql_transformer.py
@@ -23,7 +23,10 @@ from pyspark.ml.feature import SQLTransformer
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("SQLTransformerExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("SQLTransformerExample")\
+        .getOrCreate()
 
     # $example on$
     df = spark.createDataFrame([

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/standard_scaler_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/standard_scaler_example.py b/examples/src/main/python/ml/standard_scaler_example.py
index c50804f..c002748 100644
--- a/examples/src/main/python/ml/standard_scaler_example.py
+++ b/examples/src/main/python/ml/standard_scaler_example.py
@@ -23,7 +23,10 @@ from pyspark.ml.feature import StandardScaler
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("StandardScalerExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("StandardScalerExample")\
+        .getOrCreate()
 
     # $example on$
     dataFrame = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/stopwords_remover_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/stopwords_remover_example.py b/examples/src/main/python/ml/stopwords_remover_example.py
index 5736267..395fdef 100644
--- a/examples/src/main/python/ml/stopwords_remover_example.py
+++ b/examples/src/main/python/ml/stopwords_remover_example.py
@@ -23,7 +23,10 @@ from pyspark.ml.feature import StopWordsRemover
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("StopWordsRemoverExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("StopWordsRemoverExample")\
+        .getOrCreate()
 
     # $example on$
     sentenceData = spark.createDataFrame([

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/string_indexer_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/string_indexer_example.py b/examples/src/main/python/ml/string_indexer_example.py
index aacd4f9..a328e04 100644
--- a/examples/src/main/python/ml/string_indexer_example.py
+++ b/examples/src/main/python/ml/string_indexer_example.py
@@ -23,7 +23,10 @@ from pyspark.ml.feature import StringIndexer
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("StringIndexerExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("StringIndexerExample")\
+        .getOrCreate()
 
     # $example on$
     df = spark.createDataFrame(

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/tf_idf_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/tf_idf_example.py b/examples/src/main/python/ml/tf_idf_example.py
index 25df816..fb4ad99 100644
--- a/examples/src/main/python/ml/tf_idf_example.py
+++ b/examples/src/main/python/ml/tf_idf_example.py
@@ -23,7 +23,10 @@ from pyspark.ml.feature import HashingTF, IDF, Tokenizer
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("TfIdfExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("TfIdfExample")\
+        .getOrCreate()
 
     # $example on$
     sentenceData = spark.createDataFrame([

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/tokenizer_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/tokenizer_example.py b/examples/src/main/python/ml/tokenizer_example.py
index 5be4b4c..e61ec92 100644
--- a/examples/src/main/python/ml/tokenizer_example.py
+++ b/examples/src/main/python/ml/tokenizer_example.py
@@ -23,7 +23,10 @@ from pyspark.ml.feature import Tokenizer, RegexTokenizer
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("TokenizerExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("TokenizerExample")\
+        .getOrCreate()
 
     # $example on$
     sentenceDataFrame = spark.createDataFrame([

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/train_validation_split.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/train_validation_split.py b/examples/src/main/python/ml/train_validation_split.py
index 2e43a0f..5f5c52a 100644
--- a/examples/src/main/python/ml/train_validation_split.py
+++ b/examples/src/main/python/ml/train_validation_split.py
@@ -31,7 +31,10 @@ Run with:
 """
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("TrainValidationSplit").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("TrainValidationSplit")\
+        .getOrCreate()
     # $example on$
     # Prepare training and test data.
     data = spark.read.format("libsvm")\

http://git-wip-us.apache.org/repos/asf/spark/blob/8b4ab590/examples/src/main/python/ml/vector_assembler_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/ml/vector_assembler_example.py b/examples/src/main/python/ml/vector_assembler_example.py
index 019a9ea..b955ff0 100644
--- a/examples/src/main/python/ml/vector_assembler_example.py
+++ b/examples/src/main/python/ml/vector_assembler_example.py
@@ -24,7 +24,10 @@ from pyspark.ml.feature import VectorAssembler
 from pyspark.sql import SparkSession
 
 if __name__ == "__main__":
-    spark = SparkSession.builder.appName("VectorAssemblerExample").getOrCreate()
+    spark = SparkSession\
+        .builder\
+        .appName("VectorAssemblerExample")\
+        .getOrCreate()
 
     # $example on$
     dataset = spark.createDataFrame(


