spark-issues mailing list archives

From "Thincrs (JIRA)" <>
Subject [jira] [Commented] (SPARK-26164) [SQL] Allow FileFormatWriter to write multiple partitions/buckets without sort
Date Thu, 07 Feb 2019 19:17:01 GMT


Thincrs commented on SPARK-26164:

A user of thincrs has selected this issue. Deadline: Thu, Feb 14, 2019 7:16 PM

> [SQL] Allow FileFormatWriter to write multiple partitions/buckets without sort
> ------------------------------------------------------------------------------
>                 Key: SPARK-26164
>                 URL:
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>    Affects Versions: 2.4.0
>            Reporter: Cheng Su
>            Priority: Minor
> Problem:
> Currently, Spark always requires a local sort before writing to the output table on partition/bucket
columns [1]. The disadvantage is that this sort may waste reserved CPU time on the executor due to
spill. Hive does not require a local sort before writing the output table [2], and we saw performance
regressions when migrating Hive workloads to Spark.
> Proposal:
> We can avoid the local sort by keeping a mapping between file paths and output writers.
When a row targets a new file path, we create a new output writer; otherwise, we re-use
the existing output writer (the main change should be in FileFormatDataWriter.scala).
This is very similar to what Hive does in [2].
> The new behavior (avoiding the sort by keeping multiple output writers) consumes more
memory on the executor than the current behavior (only one output writer open at a time),
since multiple output writers must be open simultaneously. We can therefore add a config
to switch between the current and new behavior.
> [1]: spark FileFormatWriter.scala - []
> [2]: hive - []
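The proposed approach (keep a map from output file path to its open writer, creating a writer only the first time a path is seen) can be sketched as below. The `OutputWriter` trait, `ConcatenatingWriter`, and `MultiWriterSink` names here are simplified, hypothetical stand-ins for illustration, not Spark's actual writer API:

```scala
import scala.collection.mutable

// Hypothetical, simplified stand-in for an output writer abstraction.
trait OutputWriter {
  def write(row: String): Unit
  def close(): Unit
}

// Toy writer that just accumulates rows in memory.
class ConcatenatingWriter extends OutputWriter {
  val rows = mutable.Buffer.empty[String]
  def write(row: String): Unit = rows += row
  def close(): Unit = ()
}

// Instead of sorting rows by partition/bucket key and holding a single
// writer open at a time, keep a map from file path to an open writer
// and re-use the writer whenever the same path is written again.
class MultiWriterSink(newWriter: String => OutputWriter) {
  private val writers = mutable.Map.empty[String, OutputWriter]

  def write(path: String, row: String): Unit = {
    // Create a writer on first sight of a path; re-use it afterwards.
    val writer = writers.getOrElseUpdate(path, newWriter(path))
    writer.write(row)
  }

  // All writers stay open until the task finishes, which is why this
  // approach trades extra executor memory for skipping the sort.
  def closeAll(): Unit = writers.values.foreach(_.close())
}
```

The memory trade-off described in the issue is visible here: `writers` holds one open writer per distinct partition/bucket path seen by the task, whereas the sort-based approach only ever holds one.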

This message was sent by Atlassian JIRA
