spark-commits mailing list archives

From l...@apache.org
Subject spark git commit: [SQL] Use path.makeQualified in newParquet.
Date Sat, 04 Apr 2015 15:26:51 GMT
Repository: spark
Updated Branches:
  refs/heads/master 9b40c17ab -> da25c86d6


[SQL] Use path.makeQualified in newParquet.

Author: Yin Huai <yhuai@databricks.com>

Closes #5353 from yhuai/wrongFS and squashes the following commits:

849603b [Yin Huai] Not use deprecated method.
6d6ae34 [Yin Huai] Use path.makeQualified.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/da25c86d
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/da25c86d
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/da25c86d

Branch: refs/heads/master
Commit: da25c86d64ff9ce80f88186ba083f6c21dd9a568
Parents: 9b40c17
Author: Yin Huai <yhuai@databricks.com>
Authored: Sat Apr 4 23:26:10 2015 +0800
Committer: Cheng Lian <lian@databricks.com>
Committed: Sat Apr 4 23:26:10 2015 +0800

----------------------------------------------------------------------
 .../src/main/scala/org/apache/spark/sql/parquet/newParquet.scala  | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/da25c86d/sql/core/src/main/scala/org/apache/spark/sql/parquet/newParquet.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/parquet/newParquet.scala b/sql/core/src/main/scala/org/apache/spark/sql/parquet/newParquet.scala
index 583bac4..0dce362 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/parquet/newParquet.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/parquet/newParquet.scala
@@ -268,7 +268,8 @@ private[sql] case class ParquetRelation2(
       // containing Parquet files (e.g. partitioned Parquet table).
       val baseStatuses = paths.distinct.map { p =>
         val fs = FileSystem.get(URI.create(p), sparkContext.hadoopConfiguration)
-        val qualified = fs.makeQualified(new Path(p))
+        val path = new Path(p)
+        val qualified = path.makeQualified(fs.getUri, fs.getWorkingDirectory)
 
         if (!fs.exists(qualified) && maybeSchema.isDefined) {
           fs.mkdirs(qualified)
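
For context, the behavior the patch switches to can be sketched in plain Scala. This is a simplified illustration using `java.net.URI`, not the actual Hadoop `Path.makeQualified(URI, Path)` implementation; the function name and parameters below are illustrative. The idea is that qualifying a path fills in a missing scheme and authority from the FileSystem's own URI (so the status objects point at the right filesystem), and resolves relative paths against the working directory:

```scala
import java.net.URI

// Simplified sketch of path qualification (assumption: illustrative only,
// not Hadoop's real implementation):
//  - a relative path is resolved against the working directory
//  - a missing scheme/authority is taken from the FileSystem's URI
def makeQualified(raw: String, fsUri: URI, workingDir: String): URI = {
  val u = URI.create(raw)
  val absolutePath =
    if (u.getPath.startsWith("/")) u.getPath
    else s"$workingDir/${u.getPath}"
  val scheme = Option(u.getScheme).getOrElse(fsUri.getScheme)
  val authority = Option(u.getAuthority).getOrElse(fsUri.getAuthority)
  new URI(scheme, authority, absolutePath, null, null)
}

val fsUri = URI.create("hdfs://namenode:8020")
// A bare absolute path picks up the FileSystem's scheme and authority:
println(makeQualified("/user/hive/warehouse/t", fsUri, "/user/me"))
// prints hdfs://namenode:8020/user/hive/warehouse/t
// A relative path is also resolved against the working directory:
println(makeQualified("data/t", fsUri, "/user/me"))
// prints hdfs://namenode:8020/user/me/data/t
```

The practical difference the fix addresses: calling `path.makeQualified` with the URI and working directory of the `FileSystem` obtained from the path itself guarantees the qualified path is anchored to that filesystem, rather than whichever filesystem a deprecated overload might default to.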


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org

