hive-dev mailing list archives

From "Navis (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HIVE-1035) limit can be optimized if the limit is happening on the reducer
Date Fri, 15 Feb 2013 06:27:15 GMT

     [ https://issues.apache.org/jira/browse/HIVE-1035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Navis resolved HIVE-1035.
-------------------------

    Resolution: Duplicate

Applied option 1 in HIVE-3550.
                
> limit can be optimized if the limit is happening on the reducer
> ---------------------------------------------------------------
>
>                 Key: HIVE-1035
>                 URL: https://issues.apache.org/jira/browse/HIVE-1035
>             Project: Hive
>          Issue Type: Bug
>          Components: Query Processor
>            Reporter: Namit Jain
>
> A query like:
> select ... from A join B..  limit 10;
> where the limit is performed on the reducer can be further optimized.
> Currently, once the limit has been reached the reduce-side operators have no more work to do, but the ExecReducer still unnecessarily deserializes all the remaining rows.
> The following optimizations can be done:
> 1. Do nothing in reduce() in ExecReducer once the limit has been reached.
> 2. Modify the map-reduce framework so that it does not even invoke the reduce() method in ExecReducer.
> Option 2 may require some work from Hadoop, but we should minimally do option 1.
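
The option-1 idea above amounts to a short-circuit guard at the top of reduce(). Below is a minimal, hypothetical sketch of that guard for a plain Hadoop Reducer, not the actual Hive ExecReducer: the class name LimitAwareReducer, the job property example.reduce.limit, and the Text/LongWritable key and value types are all assumptions for illustration. In Hive the limit would come from the query plan (the LimitOperator) rather than a job configuration property, and option 2 would additionally require the framework to stop calling reduce() at all once the limit is satisfied.

    // Hypothetical illustration of "do nothing in reduce() after the limit" (option 1).
    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class LimitAwareReducer extends Reducer<Text, Text, Text, LongWritable> {

      // Assumed configuration key for the example; Hive carries the limit in the
      // query plan, not in a plain job property.
      private static final String LIMIT_KEY = "example.reduce.limit";

      private long limit;
      private long emitted = 0;

      @Override
      protected void setup(Context context) {
        limit = context.getConfiguration().getLong(LIMIT_KEY, Long.MAX_VALUE);
      }

      @Override
      protected void reduce(Text key, Iterable<Text> values, Context context)
          throws IOException, InterruptedException {
        // Option 1: once the limit is satisfied, return immediately so no further
        // value iteration (and hence no further deserialization into the operator
        // tree) happens. The framework still invokes reduce() for every key;
        // avoiding even that invocation is what option 2 would require.
        if (emitted >= limit) {
          return;
        }
        for (Text value : values) {
          if (emitted >= limit) {
            return;
          }
          context.write(key, new LongWritable(1));
          emitted++;
        }
      }
    }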

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
