spark-issues mailing list archives

From "Matthias Wolf (JIRA)" <>
Subject [jira] [Commented] (SPARK-23997) Configurable max number of buckets
Date Thu, 19 Jul 2018 08:14:00 GMT


Matthias Wolf commented on SPARK-23997:

Is there any issue with making this limit configurable? We save large amounts of data in
bucketed tables to avoid unnecessary shuffles, and often run into this bucket limit.
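
For illustration, a minimal sketch of that write pattern using the standard
DataFrameWriter bucketing API (the session setup, input path, table names, column
name, and bucket counts are all made up for the example):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("bucketing-example").getOrCreate()
    val df = spark.read.parquet("/data/events")  // any DataFrame with a user_id column

    // Persist as a bucketed, sorted table so that later joins and aggregations
    // on user_id can read pre-partitioned data without a shuffle.
    df.write
      .bucketBy(4096, "user_id")
      .sortBy("user_id")
      .saveAsTable("events_bucketed")

    // Requesting 100000 or more buckets trips the hard-coded limit and the
    // write fails with an error to the effect of "Number of buckets should be
    // greater than 0 but less than 100000", before any data is written.
    df.write
      .bucketBy(150000, "user_id")
      .saveAsTable("events_bucketed_large")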

> Configurable max number of buckets
> ----------------------------------
>                 Key: SPARK-23997
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Input/Output, SQL
>    Affects Versions: 2.2.1, 2.3.0
>            Reporter: Fernando Pereira
>            Priority: Major
> When exporting data as a table, the user can choose to split the data into buckets by
> choosing the bucketing columns and the number of buckets. Currently there is a hard-coded
> limit of 99,999 buckets.
> However, for heavy workloads this limit may be too restrictive, a situation that will only
> become more common as workloads grow.
> As per the comments in SPARK-19618, this limit could be made configurable.
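
For reference, a paraphrased sketch of the validation that imposes the limit (the real
check lives in Spark's BucketSpec; the exact exception type and message wording in the
sources may differ):

    // Any bucket count at or above 100000 is rejected outright when the
    // bucket spec is constructed.
    case class BucketSpec(
        numBuckets: Int,
        bucketColumnNames: Seq[String],
        sortColumnNames: Seq[String]) {
      require(numBuckets > 0 && numBuckets < 100000,
        s"Number of buckets should be greater than 0 but less than 100000. Got $numBuckets")
    }

    // The change proposed here would read the ceiling from a SQL conf entry,
    // e.g. (hypothetical name) spark.sql.sources.bucketing.maxBuckets, so that
    // heavy workloads can raise it without patching and rebuilding Spark.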

