hive-dev mailing list archives

From "Rui Li (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-7526) Research to use groupby transformation to replace Hive existing partitionByKey and SparkCollector combination
Date Mon, 04 Aug 2014 09:08:12 GMT

    [ https://issues.apache.org/jira/browse/HIVE-7526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14084450#comment-14084450 ]

Rui Li commented on HIVE-7526:
------------------------------

Hi [~xuefuz] [~csun], it seems that in SparkShuffler we lose the number of partitions when
applying the shuffle transformations. It may be useful if the user can specify it (e.g.
HIVE-7540). Should we add that to the "shuffle" method?

> Research to use groupby transformation to replace Hive existing partitionByKey and SparkCollector combination
> -------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-7526
>                 URL: https://issues.apache.org/jira/browse/HIVE-7526
>             Project: Hive
>          Issue Type: Task
>          Components: Spark
>            Reporter: Xuefu Zhang
>            Assignee: Chao
>             Fix For: spark-branch
>
>         Attachments: HIVE-7526.2.patch, HIVE-7526.3.patch, HIVE-7526.4-spark.patch, HIVE-7526.5-spark.patch, HIVE-7526.patch
>
>
> Currently SparkClient shuffles data by calling partitionByKey(). This transformation outputs
> <key, value> tuples. However, Hive's ExecMapper expects <key, iterator<value>> tuples, and
> Spark's groupByKey() seems to output this shape directly. Thus, by using groupByKey, we may
> be able to avoid Hive's own key-clustering mechanism (in HiveReduceFunction). This research
> is to give it a try.
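The shape difference described in the quoted text can be sketched in plain Java without a Spark dependency. The `groupByKey` method below is a hypothetical stand-in that mimics the semantics of Spark's transformation: a flat list of <key, value> pairs (the partitionByKey-style output) becomes one <key, values> entry per key, the shape Hive's ExecMapper expects.

```java
import java.util.*;

public class GroupByKeySketch {
    // Collapse a flat list of <key, value> pairs (keys may repeat) into
    // one <key, list-of-values> entry per key, preserving encounter order.
    static Map<String, List<Integer>> groupByKey(List<Map.Entry<String, Integer>> pairs) {
        Map<String, List<Integer>> grouped = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            grouped.computeIfAbsent(p.getKey(), k -> new ArrayList<>()).add(p.getValue());
        }
        return grouped;
    }

    public static void main(String[] args) {
        // partitionByKey-style output: same key can appear in several tuples.
        List<Map.Entry<String, Integer>> shuffled = List.of(
            Map.entry("a", 1), Map.entry("b", 2), Map.entry("a", 3));
        // groupByKey-style output: {a=[1, 3], b=[2]}
        System.out.println(groupByKey(shuffled));
    }
}
```

With real Spark, `JavaPairRDD.groupByKey()` returns a `JavaPairRDD<K, Iterable<V>>`, so the grouping happens during the shuffle itself rather than in a downstream clustering step.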



--
This message was sent by Atlassian JIRA
(v6.2#6252)
