hive-dev mailing list archives

From "Phabricator (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HIVE-4068) Size of aggregation buffer which uses non-primitive type is not estimated correctly
Date Sun, 24 Feb 2013 07:26:13 GMT

     [ https://issues.apache.org/jira/browse/HIVE-4068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Phabricator updated HIVE-4068:
------------------------------

    Attachment: HIVE-4068.D8859.1.patch

navis requested code review of "HIVE-4068 [jira] Size of aggregation buffer which uses non-primitive
type is not estimated correctly".

Reviewers: JIRA

HIVE-4068 Size of aggregation buffer which uses non-primitive type is not estimated correctly

Currently, Hive assumes that an aggregation buffer holding a map occupies a fixed 256 bytes. If the actual size is larger, an OutOfMemoryError can be thrown (especially for buffers larger than 1 KB).

Workaround: set hive.map.aggr.hash.percentmemory to a value smaller than the default (0.5).
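The estimation gap described above can be illustrated with a minimal sketch. This is not Hive's actual code: the class, method, and per-entry byte figures below are hypothetical illustrations of why a fixed 256-byte estimate badly undercounts a map-backed aggregation buffer, which is what lets the hash aggregation admit far more buffers than memory allows before the workaround setting forces an earlier flush.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the estimation gap; names and byte counts are
// illustrative assumptions, not Hive's actual implementation.
public class BufferEstimate {
    // The fixed estimate used for a non-primitive aggregation buffer.
    static final int FIXED_ESTIMATE = 256;

    // Rough actual footprint: a per-map overhead plus a per-entry cost
    // covering the entry object, the key's characters, and a boxed value.
    // The specific numbers (64, 16, 2 bytes/char) are illustrative only.
    static long roughActualSize(Map<String, Long> buffer) {
        long size = 64; // map object overhead, illustrative
        for (Map.Entry<String, Long> e : buffer.entrySet()) {
            size += 64                          // entry object overhead
                  + 2L * e.getKey().length()    // key characters
                  + 16;                         // boxed Long value
        }
        return size;
    }

    public static void main(String[] args) {
        Map<String, Long> buffer = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            buffer.put("key-" + i, (long) i);
        }
        // With ~1k entries the rough actual size dwarfs the fixed
        // 256-byte estimate, so a memory check based on the estimate
        // fires far too late and an OutOfMemoryError can result.
        System.out.println("fixed estimate: " + FIXED_ESTIMATE);
        System.out.println("rough actual:   " + roughActualSize(buffer));
        assert roughActualSize(buffer) > FIXED_ESTIMATE;
    }
}
```

Until a real per-buffer estimate is in place, lowering hive.map.aggr.hash.percentmemory below 0.5 compensates by shrinking the fraction of task memory the hash aggregation is allowed to fill before flushing.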

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D8859

AFFECTED FILES
  ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFEvaluator.java

MANAGE HERALD RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/21519/

To: JIRA, navis

                
> Size of aggregation buffer which uses non-primitive type is not estimated correctly
> -----------------------------------------------------------------------------------
>
>                 Key: HIVE-4068
>                 URL: https://issues.apache.org/jira/browse/HIVE-4068
>             Project: Hive
>          Issue Type: Improvement
>          Components: Query Processor
>            Reporter: Navis
>            Assignee: Navis
>            Priority: Minor
>         Attachments: HIVE-4068.D8859.1.patch
>
>
> Currently, Hive assumes that an aggregation buffer holding a map occupies a fixed 256 bytes. If the actual size is larger, an OutOfMemoryError can be thrown (especially for buffers larger than 1 KB).
> Workaround: set hive.map.aggr.hash.percentmemory to a value smaller than the default (0.5).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
