hive-dev mailing list archives

From "Sergey Shelukhin (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-6430) MapJoin hash table has large memory overhead
Date Sat, 08 Mar 2014 02:27:42 GMT

    [ https://issues.apache.org/jira/browse/HIVE-6430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13924668#comment-13924668 ]

Sergey Shelukhin commented on HIVE-6430:
----------------------------------------

All Tez tests passed; some explain plans changed in details that should be unrelated (such as
column names), and ordering changed in one file.
I will check whether the trunk files need to be updated again, and/or whether ordering needs to be enforced.

> MapJoin hash table has large memory overhead
> --------------------------------------------
>
>                 Key: HIVE-6430
>                 URL: https://issues.apache.org/jira/browse/HIVE-6430
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Sergey Shelukhin
>            Assignee: Sergey Shelukhin
>         Attachments: HIVE-6430.patch
>
>
> Right now, in some queries, I see that storing e.g. 4 ints (2 for the key and 2 for the row)
> can take several hundred bytes, which is ridiculous. I am reducing the size of MJKey and
> MJRowContainer in other JIRAs, but in general we don't need a Java hash table there. We can
> either use a primitive-friendly hash table like the one from HPPC (Apache-licensed), or some
> variation of it, to map primitive keys to a single row-storage structure without an object
> per row (similar to vectorization).
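
The quoted description sketches the idea of mapping primitive join keys to rows packed into one
flat buffer, so that no per-row Java object (and no boxed key) is allocated. Below is a minimal,
hypothetical sketch of that idea, not the patch's actual code and not HPPC's API: an
open-addressed long-to-offset table in front of a single ByteBuffer row store. The class and
method names (FlatRowHashTable, put, get) are made up for illustration, and resizing of the key
table is omitted.

import java.nio.ByteBuffer;
import java.util.Arrays;

/** Sketch only: maps primitive long keys to rows serialized into one flat buffer. */
public class FlatRowHashTable {
  private final long[] keys;    // primitive keys, open addressing with linear probing
  private final int[] offsets;  // offset of the serialized row inside rowStore, -1 = empty slot
  private int size;
  private ByteBuffer rowStore = ByteBuffer.allocate(1 << 16); // all rows packed back to back

  public FlatRowHashTable(int capacity) {
    int cap = Integer.highestOneBit(Math.max(capacity, 16) - 1) << 1; // round up to power of two
    keys = new long[cap];
    offsets = new int[cap];
    Arrays.fill(offsets, -1);
  }

  /** Serializes the row values into the flat store and records the row's offset for the key. */
  public void put(long key, int[] rowValues) {
    if (size * 2 >= keys.length) {
      throw new IllegalStateException("resize omitted in this sketch");
    }
    ensureRoom(4 + rowValues.length * 4);
    int offset = rowStore.position();
    rowStore.putInt(rowValues.length);   // row layout: length, then values
    for (int v : rowValues) {
      rowStore.putInt(v);
    }
    int mask = keys.length - 1;
    int slot = mix(key) & mask;
    while (offsets[slot] != -1 && keys[slot] != key) {
      slot = (slot + 1) & mask;          // linear probing
    }
    if (offsets[slot] == -1) {
      size++;
    }
    keys[slot] = key;
    offsets[slot] = offset;
  }

  /** Returns the row stored for the key, or null if the key is absent. */
  public int[] get(long key) {
    int mask = keys.length - 1;
    int slot = mix(key) & mask;
    while (offsets[slot] != -1) {
      if (keys[slot] == key) {
        int pos = offsets[slot];
        int len = rowStore.getInt(pos);
        int[] row = new int[len];
        for (int i = 0; i < len; i++) {
          row[i] = rowStore.getInt(pos + 4 + i * 4);
        }
        return row;
      }
      slot = (slot + 1) & mask;
    }
    return null;
  }

  private void ensureRoom(int bytes) {
    if (rowStore.remaining() >= bytes) {
      return;
    }
    int newCap = Math.max(rowStore.capacity() * 2, rowStore.position() + bytes);
    ByteBuffer bigger = ByteBuffer.allocate(newCap);
    rowStore.flip();
    bigger.put(rowStore);
    rowStore = bigger;
  }

  private static int mix(long key) {
    long h = key * 0x9E3779B97F4A7C15L;  // cheap avalanche mix of the primitive key
    return (int) (h ^ (h >>> 32)) & 0x7FFFFFFF;
  }
}

The point of the sketch is only the memory layout: one long[] and one int[] for the table plus a
single shared buffer for all rows, instead of a java.util.HashMap entry, boxed key, and row
container object per joined row.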



--
This message was sent by Atlassian JIRA
(v6.2#6252)
