hive-issues mailing list archives

From "Xuefu Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-10458) Enable parallel order by for spark [Spark Branch]
Date Tue, 12 May 2015 14:24:00 GMT

    [ https://issues.apache.org/jira/browse/HIVE-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539875#comment-14539875 ]

Xuefu Zhang commented on HIVE-10458:
------------------------------------

Hi [~lirui], it doesn't seem to make sense to have a parallel order and then reduce again with
one reducer. Thus, disabling parallel order for order by + limit seems better.

As a side question, I remember you mentioned that parallel order doesn't help much with
performance. Could you quantify that? If it really doesn't help, maybe we shouldn't consider
parallel order at all.
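
For context, the "parallel sorting" Spark supports here is range-partitioned sort (the
technique behind {{sortByKey}}): pick range boundaries from a key sample, route rows to
partitions by range, sort each partition independently, and concatenate. The following is a
minimal standalone Python sketch of the idea, not Hive or Spark code; the function name and
sampling details are illustrative assumptions:

```python
# Illustrative sketch of range-partitioned parallel sorting (the idea behind
# Spark's sortByKey); NOT Hive/Spark code. Names and sample size are made up.
import bisect
import random

def range_partitioned_sort(rows, num_partitions):
    # Sample the keys and derive (num_partitions - 1) range boundaries.
    sample = sorted(random.sample(rows, min(len(rows), 100)))
    step = max(1, len(sample) // num_partitions)
    boundaries = sample[step::step][:num_partitions - 1]

    # Route each row to its range partition.
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        partitions[bisect.bisect_left(boundaries, row)].append(row)

    # Each partition is sorted independently (in parallel on a cluster);
    # concatenating the sorted partitions yields a total order.
    return [row for part in partitions for row in sorted(part)]

rows = random.sample(range(10_000), 1_000)
assert range_partitioned_sort(rows, 4) == sorted(rows)
```

This also shows why parallel order plus limit is redundant: with order by + limit N, a single
reducer only has to retain the top N rows, so the extra range-partitioning pass buys nothing.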

> Enable parallel order by for spark [Spark Branch]
> -------------------------------------------------
>
>                 Key: HIVE-10458
>                 URL: https://issues.apache.org/jira/browse/HIVE-10458
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Rui Li
>            Assignee: Rui Li
>         Attachments: HIVE-10458.1-spark.patch
>
>
> We don't have to force reducer# to 1 as spark supports parallel sorting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
