spark-commits mailing list archives

From andrewo...@apache.org
Subject spark git commit: [SPARK-15037][HOTFIX] Replace `sqlContext` and `sparkSession` with `spark`.
Date Tue, 10 May 2016 18:53:57 GMT
Repository: spark
Updated Branches:
  refs/heads/master cddb9da07 -> db3b4a201


[SPARK-15037][HOTFIX] Replace `sqlContext` and `sparkSession` with `spark`.

This replaces `sqlContext` with `spark` in HiveDDLSuite.scala.

Passes the Jenkins tests.

Author: Dongjoon Hyun <dongjoon@apache.org>

Closes #13030 from dongjoon-hyun/hotfix_sparkSession.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/db3b4a20
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/db3b4a20
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/db3b4a20

Branch: refs/heads/master
Commit: db3b4a20150ff7fb1caaf62ab3d2a2f1e632af36
Parents: cddb9da
Author: Dongjoon Hyun <dongjoon@apache.org>
Authored: Tue May 10 11:53:41 2016 -0700
Committer: Andrew Or <andrew@databricks.com>
Committed: Tue May 10 11:53:44 2016 -0700

----------------------------------------------------------------------
 .../scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/db3b4a20/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
----------------------------------------------------------------------
diff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
b/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
index 6dcc404..8b60802 100644
--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
@@ -536,7 +536,7 @@ class HiveDDLSuite
     withTable("t1") {
       withTempPath { dir =>
         val path = dir.getCanonicalPath
-        sqlContext.range(1).write.parquet(path)
+        spark.range(1).write.parquet(path)
         sql(s"CREATE TABLE t1 USING parquet OPTIONS (PATH '$path')")
 
         val desc = sql("DESC FORMATTED t1").collect().toSeq
@@ -548,7 +548,7 @@ class HiveDDLSuite
 
   test("desc table for data source table - partitioned bucketed table") {
     withTable("t1") {
-      sqlContext
+      spark
         .range(1).select('id as 'a, 'id as 'b, 'id as 'c, 'id as 'd).write
         .bucketBy(2, "b").sortBy("c").partitionBy("d")
         .saveAsTable("t1")

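For context on the rename above: in Spark 2.x, `SparkSession` became the unified entry point, subsuming the 1.x `SQLContext`, so test suites hold a `spark` session rather than a `sqlContext`. A minimal sketch of the new-style usage (assumes Spark 2.x on the classpath; the local master, app name, and output path are illustrative, not from this commit):

```scala
// Sketch only: requires the spark-sql dependency; not runnable standalone.
import org.apache.spark.sql.SparkSession

object SessionExample {
  def main(args: Array[String]): Unit = {
    // SparkSession replaces the old SQLContext/HiveContext entry points.
    val spark = SparkSession.builder()
      .master("local[*]")          // illustrative local mode
      .appName("session-example")  // hypothetical app name
      .getOrCreate()

    // 1.x style: sqlContext.range(1).write.parquet(path)
    // 2.x style, as in the diff above:
    spark.range(1).write.parquet("/tmp/example-parquet")

    spark.stop()
  }
}
```

Methods such as `range`, `read`, and `sql` carry over from `SQLContext` to `SparkSession` with the same signatures, which is why the diff is a mechanical receiver rename.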


