spark-issues mailing list archives

From "Herman van Hovell (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (SPARK-8682) Range Join for Spark SQL
Date Thu, 16 Jul 2015 22:31:04 GMT

    [ https://issues.apache.org/jira/browse/SPARK-8682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14630449#comment-14630449 ]

Herman van Hovell edited comment on SPARK-8682 at 7/16/15 10:31 PM:
--------------------------------------------------------------------

I have attached some performance testing code.

In this setup RangeJoin is 13-50 times faster than the Cartesian/Filter
combination. However, the performance profile is a bit unexpected: the fewer
records on the broadcasted side, the faster it is. This is the opposite of my
expectations, because RangeJoin should have a bigger advantage when the number
of broadcasted rows is larger. I am looking into this.
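
For reference, a minimal sketch of what such a timing comparison could look
like. This is a hypothetical harness, not the attached perf_testing.scala; the
table names and the query are taken from the issue description below.

{noformat}
import org.apache.spark.sql.SQLContext

object PerfSketch {
  // Time a block and print the elapsed milliseconds.
  def time[T](label: String)(block: => T): T = {
    val start = System.nanoTime()
    val result = block
    println(s"$label took ${(System.nanoTime() - start) / 1e6} ms")
    result
  }

  // The range predicate from the issue description; count() forces
  // full evaluation of the join.
  val query =
    """SELECT a.*, b.*
      |FROM   tableA a JOIN tableB b
      |  ON a.start <= b.end AND a.end > b.start""".stripMargin

  // Run once with the default Cartesian/Filter plan and once with the
  // range join enabled, and compare the reported timings.
  def run(sqlContext: SQLContext): Unit =
    time("range query")(sqlContext.sql(query).count())
}
{noformat}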


was (Author: hvanhovell):
Some performance testing code.

> Range Join for Spark SQL
> ------------------------
>
>                 Key: SPARK-8682
>                 URL: https://issues.apache.org/jira/browse/SPARK-8682
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>            Reporter: Herman van Hovell
>         Attachments: perf_testing.scala
>
>
> Currently Spark SQL uses a Broadcast Nested Loop join (or a filtered Cartesian
> Join) when it has to execute the following range query:
> {noformat}
> SELECT A.*,
>        B.*
> FROM   tableA A
>        JOIN tableB B
>         ON A.start <= B.end
>          AND A.end > B.start
> {noformat}
> This is horribly inefficient. The performance of this query can be greatly
> improved, when one of the tables can be broadcasted, by creating a range index.
> A range index is basically a sorted map containing the rows of the smaller
> table, indexed by both the high and low keys. Using this structure, the
> complexity of the query would go from O(N * M) to O(N * 2 * log(M)), where
> N = number of records in the larger table and M = number of records in the
> smaller (indexed) table.
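> To illustrate the idea, here is a minimal sketch of such a range index in
> Scala. The names are hypothetical, not the actual pull request code; for
> simplicity it only indexes the low keys and filters on the high keys, while
> the design described above indexes both, hence the 2 * log(M) factor.
> {noformat}
> // Rows of the smaller (broadcasted) table, with range keys low and high.
> case class IndexedRow(low: Long, high: Long, payload: String)
>
> class RangeIndex(rows: Seq[IndexedRow]) {
>   // Rows sorted by low key, plus a flat array of low keys to search.
>   private val byLow: Array[IndexedRow] = rows.sortBy(_.low).toArray
>   private val lows: Array[Long] = byLow.map(_.low)
>
>   // First index whose low key is strictly greater than `key`.
>   private def upperBound(key: Long): Int = {
>     var lo = 0
>     var hi = lows.length
>     while (lo < hi) {
>       val mid = (lo + hi) >>> 1
>       if (lows(mid) <= key) lo = mid + 1 else hi = mid
>     }
>     lo
>   }
>
>   // All indexed rows overlapping the probe range: rows with
>   // low <= end (bounded by the binary search) whose high > start,
>   // matching the join condition in the query above.
>   def overlapping(start: Long, end: Long): Seq[IndexedRow] =
>     byLow.view.take(upperBound(end)).filter(_.high > start).toSeq
> }
> {noformat}
> Probing each of the N rows of the larger table against this index costs one
> binary search plus the matching rows, instead of comparing against all M
> indexed rows.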
> I have created a pull request for this. According to the [Spark SQL: Relational
> Data Processing in Spark|http://people.csail.mit.edu/matei/papers/2015/sigmod_spark_sql.pdf]
> paper, similar work (page 11, section 7.2) has already been done by the ADAM
> project (I cannot locate the code though).
> Any comments and/or feedback are greatly appreciated.



