spark-commits mailing list archives

From mln...@apache.org
Subject spark git commit: [SPARK-13512][ML] add example and doc for MaxAbsScaler
Date Fri, 11 Mar 2016 07:31:49 GMT
Repository: spark
Updated Branches:
  refs/heads/master 6ca990fb3 -> 0b713e045


[SPARK-13512][ML] add example and doc for MaxAbsScaler

## What changes were proposed in this pull request?

jira: https://issues.apache.org/jira/browse/SPARK-13512
Add example and doc for ml.feature.MaxAbsScaler.

## How was this patch tested?
 unit tests

Author: Yuhao Yang <hhbyyh@gmail.com>

Closes #11392 from hhbyyh/maxabsdoc.
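For context, the transformation this patch documents is simple to state: fit finds the maximum absolute value of each feature, and transform divides every value by it, so zeros stay zero and sparsity is preserved. A minimal plain-Python sketch of that math (an illustration only, not part of this patch; `max_abs_scale` is a hypothetical name, not a Spark API):

```python
def max_abs_scale(rows):
    """Rescale each feature (column) to [-1, 1] by its max absolute value.

    Mirrors the two phases of the scaler: "fit" computes per-feature
    max |x|, "transform" divides each value by it.
    """
    n_features = len(rows[0])
    # "Fit": per-feature maximum absolute value. Falling back to 1.0 keeps
    # an all-zero feature unchanged instead of dividing by zero.
    max_abs = [max(abs(row[j]) for row in rows) or 1.0
               for j in range(n_features)]
    # "Transform": divide each value by its feature's max absolute value.
    return [[row[j] / max_abs[j] for j in range(n_features)] for row in rows]

data = [[1.0, 0.5, -8.0],
        [2.0, 1.0, -4.0],
        [4.0, 2.0,  8.0]]
scaled = max_abs_scale(data)
# scaled[0] == [0.25, 0.25, -1.0]; every value now lies in [-1, 1]
```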


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/0b713e04
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/0b713e04
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/0b713e04

Branch: refs/heads/master
Commit: 0b713e0455d01999d5a027ddc2ea8527eb085b34
Parents: 6ca990f
Author: Yuhao Yang <hhbyyh@gmail.com>
Authored: Fri Mar 11 09:31:35 2016 +0200
Committer: Nick Pentreath <nick.pentreath@gmail.com>
Committed: Fri Mar 11 09:31:35 2016 +0200

----------------------------------------------------------------------
 docs/ml-features.md                             | 32 ++++++++++++
 .../examples/ml/JavaMaxAbsScalerExample.java    | 52 ++++++++++++++++++++
 .../spark/examples/ml/MaxAbsScalerExample.scala | 49 ++++++++++++++++++
 3 files changed, 133 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/0b713e04/docs/ml-features.md
----------------------------------------------------------------------
diff --git a/docs/ml-features.md b/docs/ml-features.md
index 68d3ea2..4fe8eef 100644
--- a/docs/ml-features.md
+++ b/docs/ml-features.md
@@ -773,6 +773,38 @@ for more details on the API.
 </div>
 </div>
 
+
+## MaxAbsScaler
+
+`MaxAbsScaler` transforms a dataset of `Vector` rows, rescaling each feature to the range [-1, 1]
+by dividing by the maximum absolute value of each feature. It does not shift/center the
+data, and thus does not destroy any sparsity.
+
+`MaxAbsScaler` computes summary statistics on a dataset and produces a `MaxAbsScalerModel`. The
+model can then transform each feature individually to the range [-1, 1].
+
+The following example demonstrates how to load a dataset in libsvm format and then rescale each feature to [-1, 1].
+
+<div class="codetabs">
+<div data-lang="scala" markdown="1">
+
+Refer to the [MaxAbsScaler Scala docs](api/scala/index.html#org.apache.spark.ml.feature.MaxAbsScaler)
+and the [MaxAbsScalerModel Scala docs](api/scala/index.html#org.apache.spark.ml.feature.MaxAbsScalerModel)
+for more details on the API.
+
+{% include_example scala/org/apache/spark/examples/ml/MaxAbsScalerExample.scala %}
+</div>
+
+<div data-lang="java" markdown="1">
+
+Refer to the [MaxAbsScaler Java docs](api/java/org/apache/spark/ml/feature/MaxAbsScaler.html)
+and the [MaxAbsScalerModel Java docs](api/java/org/apache/spark/ml/feature/MaxAbsScalerModel.html)
+for more details on the API.
+
+{% include_example java/org/apache/spark/examples/ml/JavaMaxAbsScalerExample.java %}
+</div>
+</div>
+
 ## Bucketizer
 
 `Bucketizer` transforms a column of continuous features to a column of feature buckets, where the buckets are specified by users. It takes a parameter:

http://git-wip-us.apache.org/repos/asf/spark/blob/0b713e04/examples/src/main/java/org/apache/spark/examples/ml/JavaMaxAbsScalerExample.java
----------------------------------------------------------------------
diff --git a/examples/src/main/java/org/apache/spark/examples/ml/JavaMaxAbsScalerExample.java b/examples/src/main/java/org/apache/spark/examples/ml/JavaMaxAbsScalerExample.java
new file mode 100644
index 0000000..b1e3b91
--- /dev/null
+++ b/examples/src/main/java/org/apache/spark/examples/ml/JavaMaxAbsScalerExample.java
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.examples.ml;
+
+import org.apache.spark.SparkConf;
+import org.apache.spark.api.java.JavaSparkContext;
+// $example on$
+import org.apache.spark.ml.feature.MaxAbsScaler;
+import org.apache.spark.ml.feature.MaxAbsScalerModel;
+import org.apache.spark.sql.DataFrame;
+// $example off$
+import org.apache.spark.sql.SQLContext;
+
+public class JavaMaxAbsScalerExample {
+
+  public static void main(String[] args) {
+    SparkConf conf = new SparkConf().setAppName("JavaMaxAbsScalerExample");
+    JavaSparkContext jsc = new JavaSparkContext(conf);
+    SQLContext jsql = new SQLContext(jsc);
+
+    // $example on$
+    DataFrame dataFrame = jsql.read().format("libsvm").load("data/mllib/sample_libsvm_data.txt");
+    MaxAbsScaler scaler = new MaxAbsScaler()
+        .setInputCol("features")
+        .setOutputCol("scaledFeatures");
+
+    // Compute summary statistics and generate MaxAbsScalerModel
+    MaxAbsScalerModel scalerModel = scaler.fit(dataFrame);
+
+    // Rescale each feature to the range [-1, 1].
+    DataFrame scaledData = scalerModel.transform(dataFrame);
+    scaledData.show();
+    // $example off$
+    jsc.stop();
+  }
+
+}

http://git-wip-us.apache.org/repos/asf/spark/blob/0b713e04/examples/src/main/scala/org/apache/spark/examples/ml/MaxAbsScalerExample.scala
----------------------------------------------------------------------
diff --git a/examples/src/main/scala/org/apache/spark/examples/ml/MaxAbsScalerExample.scala b/examples/src/main/scala/org/apache/spark/examples/ml/MaxAbsScalerExample.scala
new file mode 100644
index 0000000..aafb5ef
--- /dev/null
+++ b/examples/src/main/scala/org/apache/spark/examples/ml/MaxAbsScalerExample.scala
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+// scalastyle:off println
+package org.apache.spark.examples.ml
+
+import org.apache.spark.{SparkConf, SparkContext}
+// $example on$
+import org.apache.spark.ml.feature.MaxAbsScaler
+// $example off$
+import org.apache.spark.sql.SQLContext
+
+object MaxAbsScalerExample {
+  def main(args: Array[String]): Unit = {
+    val conf = new SparkConf().setAppName("MaxAbsScalerExample")
+    val sc = new SparkContext(conf)
+    val sqlContext = new SQLContext(sc)
+
+    // $example on$
+    val dataFrame = sqlContext.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")
+    val scaler = new MaxAbsScaler()
+      .setInputCol("features")
+      .setOutputCol("scaledFeatures")
+
+    // Compute summary statistics and generate MaxAbsScalerModel
+    val scalerModel = scaler.fit(dataFrame)
+
+    // Rescale each feature to the range [-1, 1]
+    val scaledData = scalerModel.transform(dataFrame)
+    scaledData.show()
+    // $example off$
+    sc.stop()
+  }
+}
+// scalastyle:on println


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org

