spark-commits mailing list archives

From dav...@apache.org
Subject spark git commit: [SPARK-15433] [PYSPARK] PySpark core test should not use SerDe from PythonMLLibAPI
Date Tue, 24 May 2016 17:10:58 GMT
Repository: spark
Updated Branches:
  refs/heads/master f8763b80e -> 695d9a0fd


[SPARK-15433] [PYSPARK] PySpark core test should not use SerDe from PythonMLLibAPI

## What changes were proposed in this pull request?

The PySpark core tests currently use the `SerDe` from `PythonMLLibAPI`, which pulls in many
unrelated MLlib classes. They should use `SerDeUtil` instead.
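
For context, a minimal sketch of the call this change switches the test to, assuming a live `SparkContext` and the `org.apache.spark.api.python.*` import that PySpark's Py4J gateway registers (the sketch is illustrative and not part of the patch; the app name is hypothetical):

```python
from pyspark import SparkContext
from pyspark.rdd import RDD

sc = SparkContext("local[2]", "serdeutil-sketch")  # hypothetical app name

rdd = sc.parallelize([(1, u"a"), (2, u"b")])
java_rdd = rdd._to_java_object_rdd()  # internal helper: pickled Python RDD -> JavaRDD of Java objects

# Before: sc._jvm.SerDe.javaToPython(...)      (org.apache.spark.mllib.api.python.SerDe)
# After:  sc._jvm.SerDeUtil.javaToPython(...)  (org.apache.spark.api.python.SerDeUtil)
python_rdd = sc._jvm.SerDeUtil.javaToPython(java_rdd)
assert RDD(python_rdd, sc).count() == 2
```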

## How was this patch tested?

Existing tests.

Author: Liang-Chi Hsieh <simonh@tw.ibm.com>

Closes #13214 from viirya/pycore-use-serdeutil.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/695d9a0f
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/695d9a0f
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/695d9a0f

Branch: refs/heads/master
Commit: 695d9a0fd461070ee2684b2210fb69d0b6ed1a95
Parents: f8763b8
Author: Liang-Chi Hsieh <simonh@tw.ibm.com>
Authored: Tue May 24 10:10:41 2016 -0700
Committer: Davies Liu <davies.liu@gmail.com>
Committed: Tue May 24 10:10:41 2016 -0700

----------------------------------------------------------------------
 core/src/main/scala/org/apache/spark/api/python/SerDeUtil.scala | 2 +-
 python/pyspark/tests.py                                         | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/695d9a0f/core/src/main/scala/org/apache/spark/api/python/SerDeUtil.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/api/python/SerDeUtil.scala b/core/src/main/scala/org/apache/spark/api/python/SerDeUtil.scala
index 1c632eb..6e4eab4 100644
--- a/core/src/main/scala/org/apache/spark/api/python/SerDeUtil.scala
+++ b/core/src/main/scala/org/apache/spark/api/python/SerDeUtil.scala
@@ -137,7 +137,7 @@ private[spark] object SerDeUtil extends Logging {
    * Convert an RDD of Java objects to an RDD of serialized Python objects, that is usable by
    * PySpark.
    */
-  private[spark] def javaToPython(jRDD: JavaRDD[_]): JavaRDD[Array[Byte]] = {
+  def javaToPython(jRDD: JavaRDD[_]): JavaRDD[Array[Byte]] = {
     jRDD.rdd.mapPartitions { iter => new AutoBatchedPickler(iter) }
   }
 
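The Scala change widens `javaToPython` from `private[spark]` to public, making it an intentional entry point for callers such as the PySpark test. For illustration, Py4J also permits a fully qualified class lookup that does not depend on the gateway's registered imports; a hedged sketch, assuming `sc` and `java_rdd` as in the example above:

```python
# Fully qualified lookup through the Py4J gateway; equivalent to
# sc._jvm.SerDeUtil.javaToPython(java_rdd) when the package import is registered.
serde_util = sc._jvm.org.apache.spark.api.python.SerDeUtil
python_rdd = serde_util.javaToPython(java_rdd)
```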

http://git-wip-us.apache.org/repos/asf/spark/blob/695d9a0f/python/pyspark/tests.py
----------------------------------------------------------------------
diff --git a/python/pyspark/tests.py b/python/pyspark/tests.py
index 97ea39d..222c5ca 100644
--- a/python/pyspark/tests.py
+++ b/python/pyspark/tests.py
@@ -960,13 +960,13 @@ class RDDTests(ReusedPySparkTestCase):
         ]
         data_rdd = self.sc.parallelize(data)
         data_java_rdd = data_rdd._to_java_object_rdd()
-        data_python_rdd = self.sc._jvm.SerDe.javaToPython(data_java_rdd)
+        data_python_rdd = self.sc._jvm.SerDeUtil.javaToPython(data_java_rdd)
         converted_rdd = RDD(data_python_rdd, self.sc)
         self.assertEqual(2, converted_rdd.count())
 
         # conversion between python and java RDD threw exceptions
         data_java_rdd = converted_rdd._to_java_object_rdd()
-        data_python_rdd = self.sc._jvm.SerDe.javaToPython(data_java_rdd)
+        data_python_rdd = self.sc._jvm.SerDeUtil.javaToPython(data_java_rdd)
         converted_rdd = RDD(data_python_rdd, self.sc)
         self.assertEqual(2, converted_rdd.count())
 
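The test exercises the conversion in both directions: Python objects to Java objects, Java objects to pickled bytes, and back again. A standalone sketch of that round trip, assuming a live `SparkContext` (the sample data below is illustrative, not the test's actual fixture):

```python
from pyspark.rdd import RDD

data = sc.parallelize([u"a", u"b"])                     # illustrative values
java_rdd = data._to_java_object_rdd()                   # Python -> Java objects
python_rdd = sc._jvm.SerDeUtil.javaToPython(java_rdd)   # Java -> pickled byte batches
restored = RDD(python_rdd, sc)                          # default deserializer unpickles the batches

# Second pass: converting back and forth again should preserve the contents.
java_rdd2 = restored._to_java_object_rdd()
restored2 = RDD(sc._jvm.SerDeUtil.javaToPython(java_rdd2), sc)
assert restored2.count() == 2
```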


