hive-dev mailing list archives

From "Chao (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-7503) Support Hive's multi-table insert query with Spark [Spark Branch]
Date Mon, 08 Sep 2014 16:48:28 GMT

    [ https://issues.apache.org/jira/browse/HIVE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14125736#comment-14125736 ]

Chao commented on HIVE-7503:
----------------------------

All the failing diffs are caused by either duplicated FileSinks or multi-table insertion.
I should probably start updating the related .q files for the latter.
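
For context, the multi-table insert pattern those .q files exercise looks roughly like the sketch below (illustrative table names, not taken from the actual failing tests); a single FROM clause feeds several INSERT branches:

    FROM src
    INSERT OVERWRITE TABLE dest1 SELECT key, value WHERE key < 100
    INSERT OVERWRITE TABLE dest2 SELECT key, value WHERE key >= 100;

On MapReduce each branch may end up as its own job; the question for the Spark branch is whether the branches can share the source scan and run concurrently.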

> Support Hive's multi-table insert query with Spark [Spark Branch]
> -----------------------------------------------------------------
>
>                 Key: HIVE-7503
>                 URL: https://issues.apache.org/jira/browse/HIVE-7503
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Xuefu Zhang
>            Assignee: Chao
>              Labels: spark-m1
>         Attachments: HIVE-7503.1-spark.patch, HIVE-7503.2-spark.patch, HIVE-7503.3-spark.patch, HIVE-7503.4-spark.patch
>
>
> For Hive's multi-insert query (https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML), there may be an MR job for each insert. When we achieve this with Spark, it would be nice if all the inserts could happen concurrently.
> It seems that this functionality isn't available in Spark. To make things worse, the source of the insert may be re-computed unless it's staged. Even with this, the inserts will happen sequentially, making the performance suffer.
> This task is to find out what it takes in Spark to enable this without requiring the source to be staged or the inserts to run sequentially. If this has to be solved in Hive, find out an optimal way to do this.
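
To make the staging concern above concrete, a hand-rolled fallback at the query level might look like the following (only a sketch with illustrative table names, not a proposed fix): materialize the shared source once, then run each insert against the staged copy. The source is no longer re-computed, but the inserts still run one after the other, which is exactly the combination this task hopes to avoid.

    -- Stage the shared source once so the insert branches do not re-compute it.
    CREATE TABLE staged_src AS SELECT key, value FROM src;

    -- Each insert is now a separate job; they still run sequentially.
    INSERT OVERWRITE TABLE dest1 SELECT key, value FROM staged_src WHERE key < 100;
    INSERT OVERWRITE TABLE dest2 SELECT key, value FROM staged_src WHERE key >= 100;

    DROP TABLE staged_src;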



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
