hive-issues mailing list archives

From "Rui Li (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-10458) Enable parallel order by for spark [Spark Branch]
Date Tue, 12 May 2015 05:56:01 GMT

[ https://issues.apache.org/jira/browse/HIVE-10458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539299#comment-14539299 ]

Rui Li commented on HIVE-10458:
-------------------------------

Hi [~xuefuz], I've looked at some of the failures. Most of them are due to how we handle {{order
by + limit}} queries. When an order by is followed by a limit, Hive assumes there's only
1 reducer, so we don't create an extra shuffle to apply the limit (otherwise there'd be an
extra shuffle with only 1 reducer just for the limit clause). With parallel order by, that
single-reducer assumption no longer holds.
To solve this, we can either disable parallel order by when there's a limit, or force
an extra shuffle for the limit when running on Spark. What's your opinion?
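
To make the failure mode concrete, here is a minimal sketch against the plain Spark RDD API (the sample query, names, and partition counts are made up for illustration; this is not the HIVE-10458 patch). With parallel order by, {{sortByKey}} spreads the totally ordered output over several partitions, so "take the first N rows of the single reducer's output" no longer works; forcing one more shuffle down to a single partition restores it:

{code:scala}
// Illustrative sketch only -- not the HIVE-10458 patch. Shows why
// "order by + limit" needs extra handling once the sort runs in parallel.
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

object OrderByLimitSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("order-by-limit").setMaster("local[4]"))

    // Stand-in for: SELECT * FROM t ORDER BY key LIMIT 3
    val rows  = sc.parallelize(Seq(5 -> "e", 3 -> "c", 1 -> "a", 4 -> "d", 2 -> "b"))
    val limit = 3

    // Parallel order by: sortByKey range-partitions the data, so the result
    // is totally ordered ACROSS partitions but spread over 4 of them. The
    // old single-reducer assumption (one sorted partition holding all rows)
    // no longer holds, so taking the head of "the" partition is wrong.
    val sorted = rows.sortByKey(ascending = true, numPartitions = 4)

    // The "extra shuffle" option: collapse into a single sorted partition;
    // the limit is then once again just the first N rows of that partition.
    val limited = sorted
      .repartitionAndSortWithinPartitions(new HashPartitioner(1))
      .mapPartitions(_.take(limit))

    limited.collect().foreach(println)  // (1,a), (2,b), (3,c)
    sc.stop()
  }
}
{code}

The other option, disabling parallel order by whenever a limit is present, would simply keep the plan at 1 reducer for such queries and avoid the extra shuffle entirely.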

> Enable parallel order by for spark [Spark Branch]
> -------------------------------------------------
>
>                 Key: HIVE-10458
>                 URL: https://issues.apache.org/jira/browse/HIVE-10458
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Rui Li
>            Assignee: Rui Li
>         Attachments: HIVE-10458.1-spark.patch
>
>
> We don't have to force reducer# to 1, since Spark supports parallel sorting.
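
For reference, the parallel sorting the description refers to can be seen with the plain RDD API in a {{spark-shell}} session (where {{sc}} is predefined); again a sketch of the idea only, not the patch:

{code:scala}
// sortByKey builds a RangePartitioner from a sample of the keys, so
// partition i holds only keys smaller than those in partition i+1 and each
// partition is sorted locally. Reading the partitions in order yields a
// total order without ever funneling the data through a single reducer.
val sorted = sc.parallelize(1 to 1000000).map(k => (k, ()))
              .sortByKey(ascending = true, numPartitions = 8)
sorted.partitions.length  // 8 sorted, range-partitioned pieces
{code}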



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
