spark-reviews mailing list archives

From gengliangwang <...@git.apache.org>
Subject [GitHub] spark pull request #21004: [SPARK-23896][SQL]Improve PartitioningAwareFileIn...
Date Thu, 12 Apr 2018 09:51:25 GMT
Github user gengliangwang commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21004#discussion_r181025558
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
    @@ -384,13 +356,9 @@ case class DataSource(
     
           // This is a non-streaming file based datasource.
           case (format: FileFormat, _) =>
    -        val allPaths = caseInsensitiveOptions.get("path") ++ paths
    -        val hadoopConf = sparkSession.sessionState.newHadoopConf()
    -        val globbedPaths = allPaths.flatMap(
    -          DataSource.checkAndGlobPathIfNecessary(hadoopConf, _, checkFilesExist)).toArray
    -
    -        val fileStatusCache = FileStatusCache.getOrCreate(sparkSession)
    -        val (dataSchema, partitionSchema) = getOrInferFileFormatSchema(format, fileStatusCache)
    +        checkAndGlobPathIfNecessary(checkEmptyGlobPath = true, checkFilesExist = checkFilesExist)
    --- End diff --
    
    Yes. Originally it globbed twice too. I don't have a good solution to avoid this.
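One conceivable way to avoid globbing twice would be to memoize the expansion so that relation resolution and schema inference share a single result. A minimal sketch in plain Scala; `PathResolver` and the caller-supplied `glob` function are illustrative names, not Spark's actual API:

```scala
// Hypothetical sketch: compute the expensive glob expansion once and let
// every later consumer (e.g. schema inference) reuse the cached result.
// `PathResolver` and `glob` are illustrative, not Spark internals.
class PathResolver(rawPaths: Seq[String], glob: String => Seq[String]) {
  // lazy val: evaluated on first access, then cached for all callers.
  lazy val globbedPaths: Seq[String] = rawPaths.flatMap(glob)
}

object PathResolverDemo extends App {
  var globCalls = 0
  val fakeGlob: String => Seq[String] = { p =>
    globCalls += 1                      // count expansions to show caching
    Seq(p + "/part-0", p + "/part-1")   // pretend the pattern matched files
  }
  val resolver = new PathResolver(Seq("/data/*.parquet"), fakeGlob)
  resolver.globbedPaths                 // first access: glob runs
  resolver.globbedPaths                 // second access: cached result reused
  assert(globCalls == 1)
}
```

Whether this maps cleanly onto `DataSource` depends on where the two glob sites sit; if both can see the same object, a cached field like the above would make the second expansion free.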


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org

