spark-issues mailing list archives

From "Matei Zaharia (JIRA)" <>
Subject [jira] [Commented] (SPARK-3032) Potential bug when running sort-based shuffle with sorting using TimSort
Date Tue, 23 Sep 2014 03:00:35 GMT


Matei Zaharia commented on SPARK-3032:

I'm not completely sure that this is because hashCode provides a partial ordering, because
I believe TimSort is supposed to work on partial orderings as well. I believe the problem
is an integer overflow when we subtract key1.hashCode - key2.hashCode. Can you try replacing
the line that returns h1 - h2 in keyComparator with returning Integer.compare(h1, h2)? This
will properly deal with overflow.

Returning h1 - h2 is definitely wrong: for example, suppose that h1 = Int.MaxValue and h2 =
Int.MinValue; then h1 - h2 overflows to -1, even though h1 > h2.

Please add a unit test for this case as well.
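The overflow described above can be sketched in a few lines of Scala (standalone demo methods, not Spark's actual comparator code):

```scala
// Demonstrates why comparing hash codes by subtraction is unsafe.
object HashComparatorDemo {
  // Buggy: subtraction wraps around for hash codes near Int.MaxValue/MinValue.
  def buggyCompare(h1: Int, h2: Int): Int = h1 - h2

  // Safe: an explicit three-way comparison never overflows.
  def safeCompare(h1: Int, h2: Int): Int =
    if (h1 < h2) -1 else if (h1 == h2) 0 else 1

  def main(args: Array[String]): Unit = {
    val h1 = Int.MaxValue
    val h2 = Int.MinValue
    // h1 > h2, so a correct comparator must return a positive value,
    // but Int.MaxValue - Int.MinValue wraps to -1 in 32-bit arithmetic.
    println(buggyCompare(h1, h2)) // prints -1
    println(safeCompare(h1, h2))  // prints 1
  }
}
```

A comparator that flips sign like this is not transitive, which is exactly the contract violation TimSort detects and reports as "Comparison method violates its general contract!".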

> Potential bug when running sort-based shuffle with sorting using TimSort
> ------------------------------------------------------------------------
>                 Key: SPARK-3032
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Shuffle
>    Affects Versions: 1.1.0
>            Reporter: Saisai Shao
>            Priority: Critical
> When using SparkPerf's aggregate-by-key workload to test sort-based shuffle, with (String, String)
as the key and value types, I always hit this issue:
> {noformat}
> java.lang.IllegalArgumentException: Comparison method violates its general contract!
>         at org.apache.spark.util.collection.Sorter$SortState.mergeLo(
>         at org.apache.spark.util.collection.Sorter$SortState.mergeAt(
>         at org.apache.spark.util.collection.Sorter$SortState.mergeCollapse(
>         at org.apache.spark.util.collection.Sorter$SortState.access$200(
>         at org.apache.spark.util.collection.Sorter.sort(
>         at org.apache.spark.util.collection.SizeTrackingPairBuffer.destructiveSortedIterator(SizeTrackingPairBuffer.scala:83)
>         at org.apache.spark.util.collection.ExternalSorter.spillToMergeableFile(ExternalSorter.scala:323)
>         at org.apache.spark.util.collection.ExternalSorter.spill(ExternalSorter.scala:271)
>         at org.apache.spark.util.collection.ExternalSorter.maybeSpill(ExternalSorter.scala:249)
>         at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:220)
>         at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:85)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>         at
>         at org.apache.spark.executor.Executor$
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(
>         at java.util.concurrent.ThreadPoolExecutor$
>         at
> {noformat}
> It seems the current partitionKeyComparator, which uses the hashCode of String as the key
comparator, breaks the sorting contract.
> I also tested with Int as the key type, and that passes the test, since the hashCode
of an Int is the Int itself. So I think partitionDiff + the hashCode of String may potentially
break the sorting contract.

This message was sent by Atlassian JIRA
