spark-issues mailing list archives

From "Herman van Hovell (JIRA)" <>
Subject [jira] [Commented] (SPARK-8682) Range Join for Spark SQL
Date Mon, 14 Mar 2016 15:22:33 GMT


Herman van Hovell commented on SPARK-8682:

I have recently updated the PR (which implements a broadcast range join). It will need a rebase, though.

The thing is that this doesn't need to be in Spark SQL itself to work: we can use {{ExperimentalMethods.extraStrategies}}
to hook this into the planner.

> Range Join for Spark SQL
> ------------------------
>                 Key: SPARK-8682
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>            Reporter: Herman van Hovell
>         Attachments: perf_testing.scala
> Currently Spark SQL uses a Broadcast Nested Loop join (or a filtered Cartesian join)
> when it has to execute the following range query:
> {noformat}
> SELECT A.*,
>        B.*
> FROM   tableA A
>        JOIN tableB B
>         ON A.start <= B.end
>          AND A.end > B.start
> {noformat}
> This is horribly inefficient. The performance of this query can be greatly improved,
> when one of the tables can be broadcasted, by creating a range index. A range index is
> basically a sorted map containing the rows of the smaller table, indexed by both the low
> and high keys. Using this structure, the complexity of the query goes from O(N * M) to
> O(N * 2 * log(M)), where N = number of records in the larger table and M = number of
> records in the smaller (indexed) table.
> I have created a pull request for this. According to the [Spark SQL: Relational Data
> Processing in Spark|] paper, similar work (page 11, section 7.2) has already been done
> by the ADAM project (I cannot locate the code, though).
> Any comments and/or feedback are greatly appreciated.
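To illustrate the idea in the description, here is a minimal, standalone Python sketch of a broadcast range index (the actual PR is Scala code inside Spark's planner; the `RangeIndex` class and `probe` method here are illustrative names, not Spark APIs). The rows of the smaller table are sorted by their low key, so each probe from the larger table can binary-search for the cutoff `a.start <= b.end` instead of scanning every row, and then filter the remaining condition `a.end > b.start`:

```python
from bisect import bisect_right

class RangeIndex:
    """Toy broadcast range index over the smaller table's rows.

    Rows are (start, end, payload) tuples, kept sorted by the low key
    so probes can binary-search instead of doing a nested-loop scan.
    """

    def __init__(self, rows):
        self.rows = sorted(rows, key=lambda r: r[0])
        self.starts = [r[0] for r in self.rows]

    def probe(self, b_start, b_end):
        # Binary search for the cutoff: candidates must satisfy a.start <= b.end.
        hi = bisect_right(self.starts, b_end)
        # Filter the remaining join condition: a.end > b.start.
        return [r for r in self.rows[:hi] if r[1] > b_start]

index = RangeIndex([(0, 5, "a"), (3, 8, "b"), (10, 12, "c")])
print(index.probe(4, 9))    # rows "a" and "b" overlap [4, 9]
print(index.probe(11, 20))  # only row "c" overlaps [11, 20]
```

The binary search bounds the candidate set in O(log M); the trailing filter on the high key is what the second (high-key) index in the description avoids in the real implementation.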
