hive-dev mailing list archives

From "Gopal V (JIRA)" <>
Subject [jira] [Commented] (HIVE-3997) Use distributed cache to cache/localize dimension table & filter it in map task setup
Date Wed, 13 Feb 2013 20:40:13 GMT


Gopal V commented on HIVE-3997:

Tested with 6x12 task slots; the results remain essentially the same.

With the old (client-hash) code the run took 44504 milliseconds, while the dist-cache run took 52265 milliseconds.

|| metric || client-hash || dist-cache ||
| GC time elapsed (ms) | 25444 | 95839 |
| CPU time spent (ms) | 399890 | 894940 |

That is roughly 2.2x the CPU time and 3.8x the GC time for the dist-cache run, so the advantages of implementing this seem to range from slim to potentially negative.
> Use distributed cache to cache/localize dimension table & filter it in map task setup
> -------------------------------------------------------------------------------------
>                 Key: HIVE-3997
>                 URL:
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Gopal V
>            Assignee: Gopal V
> The Hive clients are not always co-located with the Hadoop/HDFS cluster.
> This means that dimension table filtering, when done on the client side, becomes very slow.
> Moreover, the conversion of the small tables into hashtables has to be done every single
> time a query is run with different filters on the big table.
> That entire hashtable then has to be shipped as part of the job, which involves even more
> HDFS writes from the far client side.
> Using the distributed cache also has the advantage that the localized files can be kept
> between jobs, instead of firing off an HDFS read for every query.
> Moving the operator pipeline for the hash generation into the map task itself has a few cons.
> The map task might OOM due to this change, and recovery will take longer because all the
> map attempts must fail first, instead of the failure being caught on the client. The client
> has no idea how much memory the hashtable needs and has to rely on the on-disk sizes
> (compressed sizes, perhaps) to decide whether it needs to fall back to a reduce-side join instead.
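The size-estimation weakness in the last paragraph can be sketched as follows. This is an illustrative model, not Hive's actual code: the function name, the expansion factor, and the memory budget are all hypothetical, standing in for the client's only available heuristic of scaling the on-disk (possibly compressed) size to guess the in-memory hashtable footprint.

```python
def should_map_join(on_disk_bytes: int,
                    memory_budget_bytes: int,
                    expansion_factor: float = 10.0) -> bool:
    """Guess whether the small table's hashtable fits in a map task.

    on_disk_bytes may be a compressed size, so expansion_factor has to
    absorb both decompression and hashtable overhead -- a crude estimate,
    which is exactly the weakness the description points out.
    (Hypothetical sketch; not the real Hive heuristic.)
    """
    estimated_hashtable_bytes = on_disk_bytes * expansion_factor
    return estimated_hashtable_bytes <= memory_budget_bytes

# A 20 MB compressed dimension table against a 256 MB budget:
# 20 MB * 10 = 200 MB fits, so a map-side join would be attempted.
print(should_map_join(20 * 1024 * 1024, 256 * 1024 * 1024))  # True
# A 50 MB table estimates to 500 MB and would fall back to a reduce join.
print(should_map_join(50 * 1024 * 1024, 256 * 1024 * 1024))  # False
```

If the guess is wrong in the optimistic direction, the failure only surfaces as an OOM inside the map task, after every attempt has been scheduled and killed, which is the slow recovery path described above.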

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
