spark-reviews mailing list archives

From liancheng <...@git.apache.org>
Subject [GitHub] spark pull request: [SPARK-5182] [SQL] Partitioning support for th...
Date Fri, 08 May 2015 15:01:37 GMT
Github user liancheng commented on a diff in the pull request:

    https://github.com/apache/spark/pull/5526#discussion_r29946653
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/sources/DataSourceStrategy.scala ---
    @@ -53,6 +58,51 @@ private[sql] object DataSourceStrategy extends Strategy {
             filters,
             (a, _) => t.buildScan(a)) :: Nil
     
    +    // Scanning partitioned FSBasedRelation
    +    case PhysicalOperation(projectList, filters, l @ LogicalRelation(t: FSBasedRelation))
    +        if t.partitionSpec.partitionColumns.nonEmpty =>
    +      val selectedPartitions = prunePartitions(filters, t.partitionSpec).toArray
    +
    +      logInfo {
    +        val total = t.partitionSpec.partitions.length
    +        val selected = selectedPartitions.length
    +        val percentPruned = (1 - selected.toDouble / total.toDouble) * 100
    +        s"Selected $selected partitions out of $total, pruned $percentPruned% partitions."
    +      }
    +
    +      // Only pushes down predicates that do not reference partition columns.
    +      val pushedFilters = {
    +        val partitionColumnNames = t.partitionSpec.partitionColumns.map(_.name).toSet
    +        filters.filter { f =>
    +          val referencedColumnNames = f.references.map(_.name).toSet
    +          referencedColumnNames.intersect(partitionColumnNames).isEmpty
    +        }
    +      }
    +
    +      buildPartitionedTableScan(
    +        l,
    +        projectList,
    +        pushedFilters,
    +        t.partitionSpec.partitionColumns,
    +        selectedPartitions) :: Nil
    +
    +    // Scanning non-partitioned FSBasedRelation
    +    case PhysicalOperation(projectList, filters, l @ LogicalRelation(t: FSBasedRelation)) =>
    +      val inputPaths = t.paths.map(new Path(_)).flatMap { path =>
    +        val fs = path.getFileSystem(t.sqlContext.sparkContext.hadoopConfiguration)
    +        val qualifiedPath = fs.makeQualified(path)
    --- End diff ---
    
    Thanks for pointing this out. Adding the reason here for future reference: for S3, the credentials part of the URL may contain `/`. `FileSystem.makeQualified` cannot handle this case properly, while `Path.makeQualified` can.
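    
    A minimal, self-contained Scala sketch of the qualification call in question (not part of the PR; the local filesystem stands in for S3 here, since reproducing the problematic URL shape requires real credentials):
    
        import org.apache.hadoop.conf.Configuration
        import org.apache.hadoop.fs.Path
    
        // Resolve the FileSystem for a path from a (here: default) Hadoop config.
        val path = new Path("/tmp/some/table")
        val fs = path.getFileSystem(new Configuration())
    
        // Qualify via the Path-side overload, as recommended above:
        val qualified = path.makeQualified(fs.getUri, fs.getWorkingDirectory)
        println(qualified)  // e.g. file:/tmp/some/table
    
        // fs.makeQualified(path) usually yields the same result, but per the
        // note above it can mishandle S3 URLs whose credential part contains '/'.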

