spark-reviews mailing list archives

From Achuth17 <...@git.apache.org>
Subject [GitHub] spark pull request #21608: [SPARK-24626] [SQL] Improve location size calcula...
Date Fri, 06 Jul 2018 14:15:21 GMT
Github user Achuth17 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21608#discussion_r200666542
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/CommandUtils.scala
---
    @@ -47,15 +48,26 @@ object CommandUtils extends Logging {
         }
       }
     
    -  def calculateTotalSize(sessionState: SessionState, catalogTable: CatalogTable): BigInt = {
    +  def calculateTotalSize(spark: SparkSession, catalogTable: CatalogTable): BigInt = {
    +
    +    val sessionState = spark.sessionState
    +    val stagingDir = sessionState.conf.getConfString("hive.exec.stagingdir", ".hive-staging")
    +
         if (catalogTable.partitionColumnNames.isEmpty) {
    -      calculateLocationSize(sessionState, catalogTable.identifier, catalogTable.storage.locationUri)
    +      calculateLocationSize(sessionState, catalogTable.identifier,
    +          catalogTable.storage.locationUri)
         } else {
           // Calculate table size as a sum of the visible partitions. See SPARK-21079
           val partitions = sessionState.catalog.listPartitions(catalogTable.identifier)
    -      partitions.map { p =>
    -        calculateLocationSize(sessionState, catalogTable.identifier, p.storage.locationUri)
    -      }.sum
    +      val paths = partitions.map(x => new Path(x.storage.locationUri.get.getPath))
    +      val pathFilter = new PathFilter {
    +        override def accept(path: Path): Boolean = {
    +          !path.getName.startsWith(stagingDir)
    +        }
    +      }
    +      val fileStatusSeq = InMemoryFileIndex.bulkListLeafFiles(paths,
    +        sessionState.newHadoopConf(), pathFilter, spark).flatMap(x => x._2)
    --- End diff --
    
    The above approach might not work either. The earlier implementation had a check that prevented recursively listing files under certain directories (`stagingDir`), so applying a `pathFilter` to the listed files might not be the right approach.
    
    So I wanted to introduce a list of strings called `filterDir` as a new parameter to `bulkListLeafFiles`, which can be used to check whether a particular directory should be recursed into further.
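
    A sketch of that idea (hypothetical: the `filterDir` parameter does not exist in `InMemoryFileIndex` today, and the signature is abbreviated for illustration; the point is to prune directories before descending, rather than filtering leaf files afterwards):
    
    ```scala
    // Hypothetical extension of InMemoryFileIndex.bulkListLeafFiles: filterDir
    // holds directory-name prefixes (e.g. Seq(".hive-staging")) that should not
    // be recursed into at all, instead of relying on a PathFilter over results.
    def bulkListLeafFiles(
        paths: Seq[Path],
        hadoopConf: Configuration,
        filter: PathFilter,
        sparkSession: SparkSession,
        filterDir: Seq[String] = Seq.empty): Seq[(Path, Seq[FileStatus])] = {
      // Inside the recursion, before descending into a child directory:
      //   val prune = filterDir.exists(d => childDir.getName.startsWith(d))
      //   if (!prune) { /* recurse into childDir */ }
      ???
    }
    ```
    
    Pruning at recursion time avoids even issuing list calls under staging directories, which a filter applied to the returned file statuses cannot do.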


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org

