spark-commits mailing list archives

From: r...@apache.org
Subject: git commit: Merge pull request #530 from aarondav/cleanup. Closes #530.
Date: Mon, 03 Feb 2014 19:25:45 GMT
Updated Branches:
  refs/heads/master 0386f42e3 -> 1625d8c44


Merge pull request #530 from aarondav/cleanup. Closes #530.

Remove explicit conversion to PairRDDFunctions in cogroup()

As SparkContext._ is already imported, using the implicit conversion appears to make the code
much cleaner. Perhaps there was some sinister reason for doing the conversion explicitly,
however.

Author: Aaron Davidson <aaron@databricks.com>

== Merge branch commits ==

commit aa4a63f1bfd5b5178fe67364dd7ce4d84c357996
Author: Aaron Davidson <aaron@databricks.com>
Date:   Sun Feb 2 23:48:04 2014 -0800

    Remove explicit conversion to PairRDDFunctions in cogroup()

    As SparkContext._ is already imported, using the implicit conversion
    appears to make the code much cleaner. Perhaps there was some sinister
    reason for doing the conversion explicitly, however.

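[Editor's note: as illustration only (not part of the commit), below is a minimal sketch of the pattern this change relies on, written against the Spark API of this era. With org.apache.spark.SparkContext._ in scope, the implicit conversion rddToPairRDDFunctions lets pair-RDD methods such as mapValues be called on an RDD[(K, V)] directly, with no explicit PairRDDFunctions wrapper. The object name, app name, and master setting are hypothetical.]

----------------------------------------------------------------------
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._  // brings rddToPairRDDFunctions into scope

object ImplicitPairOpsSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("implicit-sketch").setMaster("local[2]"))
    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2)))

    // With SparkContext._ imported, mapValues resolves through the implicit
    // conversion, so no explicit `new PairRDDFunctions(...)` wrapper is
    // needed. This is the same simplification the commit applies inside
    // cogroup().
    val bumped = pairs.mapValues(_ + 1)

    bumped.collect().foreach(println)  // e.g. (a,2), (b,3)
    sc.stop()
  }
}
----------------------------------------------------------------------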

Project: http://git-wip-us.apache.org/repos/asf/incubator-spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-spark/commit/1625d8c4
Tree: http://git-wip-us.apache.org/repos/asf/incubator-spark/tree/1625d8c4
Diff: http://git-wip-us.apache.org/repos/asf/incubator-spark/diff/1625d8c4

Branch: refs/heads/master
Commit: 1625d8c44693420de026138f3abecce2d12f895c
Parents: 0386f42
Author: Aaron Davidson <aaron@databricks.com>
Authored: Mon Feb 3 11:25:39 2014 -0800
Committer: Reynold Xin <rxin@apache.org>
Committed: Mon Feb 3 11:25:39 2014 -0800

----------------------------------------------------------------------
 .../scala/org/apache/spark/rdd/PairRDDFunctions.scala     | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/1625d8c4/core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala b/core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
index 4148581..3700614 100644
--- a/core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
+++ b/core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
@@ -458,8 +458,7 @@ class PairRDDFunctions[K: ClassTag, V: ClassTag](self: RDD[(K, V)])
       throw new SparkException("Default partitioner cannot partition array keys.")
     }
     val cg = new CoGroupedRDD[K](Seq(self, other), partitioner)
-    val prfs = new PairRDDFunctions[K, Seq[Seq[_]]](cg)(classTag[K], ClassTags.seqSeqClassTag)
-    prfs.mapValues { case Seq(vs, ws) =>
+    cg.mapValues { case Seq(vs, ws) =>
       (vs.asInstanceOf[Seq[V]], ws.asInstanceOf[Seq[W]])
     }
   }
@@ -474,8 +473,7 @@ class PairRDDFunctions[K: ClassTag, V: ClassTag](self: RDD[(K, V)])
       throw new SparkException("Default partitioner cannot partition array keys.")
     }
     val cg = new CoGroupedRDD[K](Seq(self, other1, other2), partitioner)
-    val prfs = new PairRDDFunctions[K, Seq[Seq[_]]](cg)(classTag[K], ClassTags.seqSeqClassTag)
-    prfs.mapValues { case Seq(vs, w1s, w2s) =>
+    cg.mapValues { case Seq(vs, w1s, w2s) =>
       (vs.asInstanceOf[Seq[V]], w1s.asInstanceOf[Seq[W1]], w2s.asInstanceOf[Seq[W2]])
     }
   }
@@ -749,7 +747,3 @@ class PairRDDFunctions[K: ClassTag, V: ClassTag](self: RDD[(K, V)])
 
   private[spark] def getValueClass() = implicitly[ClassTag[V]].runtimeClass
 }
-
-private[spark] object ClassTags {
-  val seqSeqClassTag = classTag[Seq[Seq[_]]]
-}
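[Editor's note: for context, a similarly hedged usage sketch, not part of the commit. The observable behavior of cogroup() is unchanged; in this era it returns an RDD of (key, (Seq[V], Seq[W])) pairs, and only the internal wrapping differs. Names below are hypothetical.]

----------------------------------------------------------------------
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._  // implicit pair-RDD conversions

object CogroupSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("cogroup-sketch").setMaster("local[2]"))
    val left  = sc.parallelize(Seq((1, "a"), (2, "b")))
    val right = sc.parallelize(Seq((1, 10), (1, 11)))

    // Groups the values from both RDDs by key. After this commit the result
    // is built by calling mapValues on the CoGroupedRDD directly instead of
    // on an explicitly constructed PairRDDFunctions wrapper.
    val grouped = left.cogroup(right)  // RDD[(Int, (Seq[String], Seq[Int]))]
    grouped.collect().foreach(println) // e.g. (1,(List(a),List(10, 11))), (2,(List(b),List()))
    sc.stop()
  }
}
----------------------------------------------------------------------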

