spark-reviews mailing list archives

From vgankidi <>
Subject [GitHub] spark pull request #19425: [SPARK-22196][Core] Combine multiple input splits...
Date Wed, 04 Oct 2017 07:03:40 GMT
GitHub user vgankidi opened a pull request:

    [SPARK-22196][Core] Combine multiple input splits into a HadoopPartition

    ## What changes were proposed in this pull request?
    Spark's native read path allows tuning the partition size via spark.sql.files.maxPartitionBytes
and spark.sql.files.openCostInBytes. It would be useful to add similar behavior to HadoopRDD,
i.e., pack multiple input splits into a single HadoopPartition based on maxPartitionBytes and
openCostInBytes. We have had several use-cases that merge small files by coalescing splits by
size, reducing the number of tasks launched.
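
    The packing strategy described above can be sketched as follows. This is an illustrative
Python sketch, not the PR's actual Scala implementation: it greedily bins splits up to
maxPartitionBytes, charging each split at least openCostInBytes to model file-open overhead
(the function name `pack_splits` and the first-fit-decreasing ordering are assumptions for
illustration).

    ```python
    # Sketch only: greedy packing of input splits into partitions, bounded by
    # max_partition_bytes, where each split costs at least open_cost_in_bytes
    # to account for the overhead of opening a file.
    def pack_splits(split_sizes, max_partition_bytes, open_cost_in_bytes):
        """Return a list of partitions, each a list of split indices."""
        # Place large splits first (first-fit-decreasing) for tighter packing.
        order = sorted(range(len(split_sizes)),
                       key=lambda i: split_sizes[i], reverse=True)
        partitions, current, current_bytes = [], [], 0
        for i in order:
            # Even a tiny split is charged the open cost.
            cost = max(split_sizes[i], open_cost_in_bytes)
            if current and current_bytes + cost > max_partition_bytes:
                partitions.append(current)
                current, current_bytes = [], 0
            current.append(i)
            current_bytes += cost
        if current:
            partitions.append(current)
        return partitions

    # Example: ten 10 MB splits, 64 MB target partitions, 4 MB open cost
    # yields 2 partitions instead of 10 tasks.
    parts = pack_splits([10 * 2**20] * 10, 64 * 2**20, 4 * 2**20)
    print(len(parts))  # -> 2
    ```

    The key knob is the open cost: without it, many near-empty splits would be packed into
one partition yet still pay a per-file open penalty at read time.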
    ## How was this patch tested?
    Added a unit test. It was also tested manually in a few production jobs. 

You can merge this pull request into a Git repository by running:

    $ git pull SPARK-22196

Alternatively, you can review and apply these changes as the patch at:

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #19425
commit 2f4e32681e50d4b42ed5b3d05d91e45483679bee
Author: Vinitha Gankidi <>
Date:   2017-10-04T06:36:56Z

    [SPARK-22196][Core] Combine multiple input splits into a HadoopPartition


