spark-issues mailing list archives

From "Ran Haim (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (SPARK-17436) dataframe.write sometimes does not keep sorting
Date Sun, 20 Nov 2016 11:31:58 GMT

     [ https://issues.apache.org/jira/browse/SPARK-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ran Haim updated SPARK-17436:
-----------------------------
    Description: 
*** update ***
It seems that in the Spark 2.0 code, the sorting issue is resolved.
The sorter does include the inner sort in its sorting key - but I think it would still be
faster to simply insert the rows into per-partition lists in a hash map.
***************
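The composite sort key described in the update can be illustrated with a small sketch. This is plain Java, not Spark's actual UnsafeKVExternalSorter code; the Row type and its fields are hypothetical stand-ins for a partitioned, pre-sorted dataset:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class CompositeSortDemo {
    // Hypothetical row: a partition key plus a timestamp the data was pre-sorted on.
    record Row(String partitionKey, long ts) {}

    public static void main(String[] args) {
        // Rows arrive already sorted by ts within the whole dataset.
        List<Row> rows = new ArrayList<>(List.of(
            new Row("b", 10), new Row("a", 20), new Row("b", 30), new Row("a", 40)));
        // Sorting on (partitionKey, ts) clusters rows per partition while
        // preserving the inner ordering - analogous to what the update says
        // the Spark 2.0 sorter does by folding the inner sort into its key.
        rows.sort(Comparator.comparing(Row::partitionKey).thenComparingLong(Row::ts));
        System.out.println(rows);
    }
}
```

If the comparator used only partitionKey, rows with equal keys would be interchangeable to the sorter, which is exactly how the original ordering could be lost.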

When using partitionBy, the data writer can sometimes break the ordering of a sorted DataFrame.

The problem originates in org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.
In the writeRows method, when too many files are open (configurable), it starts inserting
rows into an UnsafeKVExternalSorter; it then reads all the rows back from the sorter and
writes them to the corresponding files.
The problem is that the sorter sorts the rows by the partition key alone, and that can
break the original sort (or secondary sort, if you will).

I think the best way to fix this is to stop using a sorter altogether: put the rows into a map
whose key is the partition key and whose value is an ArrayList, then walk through the keys and
write each list in its original order. This will probably also be faster, since no sorting is needed.
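The proposed map-based approach can be sketched like this (a hedged illustration in plain Java; Row and group are hypothetical names, not Spark internals). The key point is that appending to a per-partition ArrayList preserves the incoming, already-sorted order without any re-sort:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PartitionGrouper {
    // Hypothetical row: a partition key plus a payload the data was pre-sorted on.
    record Row(String partitionKey, int value) {}

    // Group rows by partition key. Each ArrayList keeps the original (sorted)
    // order of its partition's rows; LinkedHashMap additionally keeps partitions
    // in the order they were first seen.
    static Map<String, List<Row>> group(List<Row> rows) {
        Map<String, List<Row>> byPartition = new LinkedHashMap<>();
        for (Row r : rows) {
            byPartition.computeIfAbsent(r.partitionKey(), k -> new ArrayList<>()).add(r);
        }
        return byPartition;
    }

    public static void main(String[] args) {
        List<Row> sorted = List.of(
            new Row("a", 1), new Row("b", 2), new Row("a", 3), new Row("b", 4));
        // Walk the keys and write each list as-is; no comparator is ever applied,
        // so the secondary sort cannot be disturbed.
        Map<String, List<Row>> grouped = group(sorted);
        System.out.println(grouped);
    }
}
```

The trade-off versus an external sorter is memory: this sketch holds all rows on the heap, whereas UnsafeKVExternalSorter can spill to disk.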



  was:
When using partitionBy, the data writer can sometimes break the ordering of a sorted DataFrame.

The problem originates in org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.
In the writeRows method, when too many files are open (configurable), it starts inserting
rows into an UnsafeKVExternalSorter; it then reads all the rows back from the sorter and
writes them to the corresponding files.
The problem is that the sorter sorts the rows by the partition key alone, and that can
break the original sort (or secondary sort, if you will).

I think the best way to fix this is to stop using a sorter altogether: put the rows into a map
whose key is the partition key and whose value is an ArrayList, then walk through the keys and
write each list in its original order. This will probably also be faster, since no sorting is needed.




> dataframe.write sometimes does not keep sorting
> -----------------------------------------------
>
>                 Key: SPARK-17436
>                 URL: https://issues.apache.org/jira/browse/SPARK-17436
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.6.1, 1.6.2, 2.0.0
>            Reporter: Ran Haim
>            Priority: Minor
>
> *** update ***
> It seems that in the Spark 2.0 code, the sorting issue is resolved.
> The sorter does include the inner sort in its sorting key - but I think it would still be
> faster to simply insert the rows into per-partition lists in a hash map.
> ***************
> When using partitionBy, the data writer can sometimes break the ordering of a sorted DataFrame.
> The problem originates in org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.
> In the writeRows method, when too many files are open (configurable), it starts inserting
> rows into an UnsafeKVExternalSorter; it then reads all the rows back from the sorter and
> writes them to the corresponding files.
> The problem is that the sorter sorts the rows by the partition key alone, and that can
> break the original sort (or secondary sort, if you will).
> I think the best way to fix this is to stop using a sorter altogether: put the rows into a
> map whose key is the partition key and whose value is an ArrayList, then walk through the
> keys and write each list in its original order. This will probably also be faster, since no
> sorting is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

