spark-issues mailing list archives

From "Yanbo Liang (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (SPARK-9265) Dataframe.limit joined with another dataframe can be non-deterministic
Date Mon, 26 Oct 2015 07:00:34 GMT

    [ https://issues.apache.org/jira/browse/SPARK-9265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14973829#comment-14973829 ]

Yanbo Liang edited comment on SPARK-9265 at 10/26/15 6:59 AM:
--------------------------------------------------------------

[~tdas] [~andrewor14] [~rxin] Could you tell me how you generated the table? Is it a Spark
SQL temporary table or a Hive table? I used an external data source to load a test table but
cannot reproduce this bug.
{code}
// Imports needed for count(), $"..." and the 'symbol column syntax
// (the spark-shell may already provide some of these):
import org.apache.spark.sql.functions._
import sqlContext.implicits._

val df = sqlContext.read.json("examples/src/main/resources/failed_suites.json")
val recentFailures = df.cache()
val topRecentFailures = recentFailures.groupBy('suiteName).agg(count("*").as('failCount)).orderBy('failCount.desc).limit(10)
val mot = topRecentFailures.as("a").join(recentFailures.as("b"), $"a.suiteName" === $"b.suiteName")
(1 to 10).foreach { i => 
  println(s"$i: " + mot.count())
}
1: 1107                                                                         
2: 1107
3: 1107
4: 1107
5: 1107
6: 1107
7: 1107
8: 1107
9: 1107
10: 1107
{code}
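
For reference, a minimal sketch (assuming the same example JSON data and a Spark 1.4/1.5 shell) of how I would drive the same reproduction through a registered Spark SQL temporary table, so the join reads from {{table("failed_suites")}} exactly as in the description:
{code}
// Sketch only: register the example JSON as a temporary table named
// "failed_suites" (name and path are placeholders from the snippet above),
// then repeat the limit + self-join reproduction against sqlContext.table().
val df = sqlContext.read.json("examples/src/main/resources/failed_suites.json")
df.registerTempTable("failed_suites")

val recentFailures = sqlContext.table("failed_suites").cache()
val topRecentFailures = recentFailures.groupBy('suiteName).agg(count("*").as('failCount)).orderBy('failCount.desc).limit(10)
val mot = topRecentFailures.as("a").join(recentFailures.as("b"), $"a.suiteName" === $"b.suiteName")
(1 to 10).foreach { i => println(s"$i: " + mot.count()) }
{code}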


was (Author: yanboliang):
@Tathagata Das @Andrew Or [~rxin] Could you tell me how you generated the table? Is it a
Spark SQL temporary table or a Hive table? I used an external data source to load a test table
but cannot reproduce this bug.
{code:scala}
val df = sqlContext.read.json("examples/src/main/resources/failed_suites.json")
{code}

> Dataframe.limit joined with another dataframe can be non-deterministic
> ----------------------------------------------------------------------
>
>                 Key: SPARK-9265
>                 URL: https://issues.apache.org/jira/browse/SPARK-9265
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.4.1
>            Reporter: Tathagata Das
>            Priority: Critical
>
> {code}
> import org.apache.spark.sql._
> import org.apache.spark.sql.functions._
> val recentFailures = table("failed_suites").cache()
> val topRecentFailures = recentFailures.groupBy('suiteName).agg(count("*").as('failCount)).orderBy('failCount.desc).limit(10)
> topRecentFailures.show(100)
> val mot = topRecentFailures.as("a").join(recentFailures.as("b"), $"a.suiteName" === $"b.suiteName")
>   
> (1 to 10).foreach { i => 
>   println(s"$i: " + mot.count())
> }
> {code}
> This shows:
> {code}
> +--------------------+---------+
> |           suiteName|failCount|
> +--------------------+---------+
> |org.apache.spark....|       85|
> |org.apache.spark....|       26|
> |org.apache.spark....|       26|
> |org.apache.spark....|       17|
> |org.apache.spark....|       17|
> |org.apache.spark....|       15|
> |org.apache.spark....|       13|
> |org.apache.spark....|       13|
> |org.apache.spark....|       11|
> |org.apache.spark....|        9|
> +--------------------+---------+
> 1: 174
> 2: 166
> 3: 174
> 4: 106
> 5: 158
> 6: 110
> 7: 174
> 8: 158
> 9: 166
> 10: 106
> {code}
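
A possible way to narrow this down (a hedged sketch, not from the original report): materialize the result of {{limit(10)}} once before the join. If the repeated counts then stay constant, the varying totals above come from the limited DataFrame being re-evaluated on every action.
{code}
// Diagnostic sketch: cache and force the limited DataFrame, then rerun the join.
// Stable counts afterwards would point at re-evaluation of limit(10) as the
// source of the non-determinism; names reuse the snippet in the description.
val topRecentFailuresCached = topRecentFailures.cache()
topRecentFailuresCached.count()  // force materialization of the 10 rows

val motCached = topRecentFailuresCached.as("a")
  .join(recentFailures.as("b"), $"a.suiteName" === $"b.suiteName")
(1 to 10).foreach { i => println(s"$i: " + motCached.count()) }
{code}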



