hive-issues mailing list archives

From "Rui Li (JIRA)" <>
Subject [jira] [Updated] (HIVE-15104) Hive on Spark generate more shuffle data than hive on mr
Date Fri, 12 May 2017 08:27:04 GMT


Rui Li updated HIVE-15104:
    Attachment: HIVE-15104.1.patch

Spark needs the hash code on the reducer side for groupBy shuffling. Since groupBy does no
ordering, the reducer has to put the shuffled data into a map to combine values by key, which
requires the hash code. So we just need to keep the hash code during SerDe when the groupBy
shuffle is used.
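The idea can be sketched as a tiny Writable-style key (a hypothetical `HashAwareKey`, not the actual class in the patch) that serializes its precomputed hash code alongside the key bytes, so the reducer-side map can group values without the hash being lost across the shuffle:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Arrays;

// Hypothetical sketch, not the actual HIVE-15104 patch: a key that keeps its
// hash code through SerDe so the reducer-side hash map can group by key
// without recomputing (or losing) the hash after the shuffle.
class HashAwareKey {
    private final byte[] bytes;
    private int hash;

    HashAwareKey(byte[] bytes) {
        this.bytes = bytes;
        this.hash = Arrays.hashCode(bytes); // computed once on the map side
    }

    // Custom SerDe layout: 4-byte length + payload + 4-byte hash code.
    void write(DataOutputStream out) throws IOException {
        out.writeInt(bytes.length);
        out.write(bytes);
        out.writeInt(hash);
    }

    static HashAwareKey read(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] b = new byte[len];
        in.readFully(b);
        HashAwareKey key = new HashAwareKey(b);
        key.hash = in.readInt(); // restore the shuffled hash instead of recomputing
        return key;
    }

    @Override
    public int hashCode() {
        return hash;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof HashAwareKey && Arrays.equals(bytes, ((HashAwareKey) o).bytes);
    }

    public static void main(String[] args) throws IOException {
        HashAwareKey k = new HashAwareKey("key1".getBytes());
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        k.write(new DataOutputStream(bos));
        HashAwareKey back = HashAwareKey.read(
                new DataInputStream(new ByteArrayInputStream(bos.toByteArray())));
        // The hash survives the round trip, so a reducer-side HashMap groups correctly.
        System.out.println(k.hashCode() == back.hashCode()); // prints "true"
    }
}
```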

Uploaded a PoC patch to demonstrate the idea. It disables kryo relocation which should not be

Also did a simple test to see the improvement. The test runs the query {{select key, count(*)
from A group by key order by key;}}, where A contains 40,000,000 records with 20 distinct
keys. The measurement is the number of bytes written during shuffle. I tested optimizing HiveKey
alone, as well as optimizing both HiveKey and BytesWritable. We can see that even for simple
classes like BytesWritable, the custom SerDe does better than a generic one.
|| ||Opt(N)||Opt(Y, Key)||Opt(Y, Key + Value)||
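As a rough illustration of why a custom SerDe beats a generic one (an assumed sketch, not the benchmark above): generic Java object serialization writes a stream header and class metadata in addition to the payload, while a hand-rolled length-prefixed encoding writes only what the payload needs:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

// Illustrative comparison (not the actual HIVE-15104 benchmark): serialize the
// same payload generically vs. with a minimal length-prefixed encoding.
class SerDeSizeDemo {
    // Generic serialization: stream header + class descriptor + payload.
    static int genericSize(byte[] payload) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(payload);
        }
        return bos.size();
    }

    // Custom SerDe: just a 4-byte length followed by the payload.
    static int customSize(byte[] payload) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(payload.length);
        out.write(payload);
        return bos.size();
    }

    public static void main(String[] args) throws IOException {
        byte[] key = "some-group-by-key".getBytes();
        // The custom encoding is strictly smaller per record.
        System.out.println("generic: " + genericSize(key) + " bytes");
        System.out.println("custom : " + customSize(key) + " bytes");
    }
}
```

The per-record overhead of generic serialization is exactly what multiplies across tens of millions of shuffled records.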

> Hive on Spark generate more shuffle data than hive on mr
> --------------------------------------------------------
>                 Key: HIVE-15104
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 1.2.1
>            Reporter: wangwenli
>            Assignee: Rui Li
>         Attachments: HIVE-15104.1.patch
> The same SQL, running on the Spark and MR engines, will generate different sizes of shuffle
> data. I think it is because Hive on MR serializes only part of the HiveKey, while Hive on Spark,
> which uses kryo, serializes the full HiveKey object.
> What is your opinion?

This message was sent by Atlassian JIRA
