hive-issues mailing list archives

From "Lefty Leverenz (JIRA)" <>
Subject [jira] [Commented] (HIVE-12049) HiveServer2: Provide an option to write serialized thrift objects in final tasks
Date Fri, 14 Oct 2016 06:29:20 GMT


Lefty Leverenz commented on HIVE-12049:

HIVE-14876 changes the default value of hive.server2.thrift.resultset.max.fetch.size from
1000 to 10000 in 2.2.0.
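For reference, the property can still be overridden per session if the new 10000-row default is not appropriate for a workload (a hedged example; the value 5000 below is arbitrary):

```sql
-- Hive 2.2.0+ defaults hive.server2.thrift.resultset.max.fetch.size to 10000;
-- override it for the current session if needed:
SET hive.server2.thrift.resultset.max.fetch.size=5000;
```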

> HiveServer2: Provide an option to write serialized thrift objects in final tasks
> --------------------------------------------------------------------------------
>                 Key: HIVE-12049
>                 URL:
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2, JDBC
>    Affects Versions: 2.0.0
>            Reporter: Rohit Dholakia
>            Assignee: Rohit Dholakia
>              Labels: TODOC2.1
>             Fix For: 2.1.0
>         Attachments: HIVE-12049.1.patch, HIVE-12049.11.patch, HIVE-12049.12.patch, HIVE-12049.13.patch,
HIVE-12049.14.patch, HIVE-12049.15.patch, HIVE-12049.16.patch, HIVE-12049.17.patch, HIVE-12049.18.patch,
HIVE-12049.19.patch, HIVE-12049.2.patch, HIVE-12049.25.patch, HIVE-12049.26.patch, HIVE-12049.3.patch,
HIVE-12049.4.patch, HIVE-12049.5.patch, HIVE-12049.6.patch, HIVE-12049.7.patch, HIVE-12049.9.patch,
new-driver-profiles.png, old-driver-profiles.png
> For each fetch request to HiveServer2, we pay the penalty of deserializing the row objects
and translating them into a different representation suitable for the RPC transfer. In moderate-
to high-concurrency scenarios, this can result in significant CPU and memory wastage. By having
each task write the appropriate thrift objects to the output files, HiveServer2 can simply
stream a batch of rows on the wire without incurring any of the additional cost of deserialization
and translation. 
> This can be implemented by writing a new SerDe, which the FileSinkOperator can use to
write thrift-formatted row batches to the output file. Since {{hive.query.result.fileformat}}
is pluggable, we can set it to SequenceFile and write each batch of thrift-formatted rows as
a value blob. The FetchTask can then simply read the blob and send it over the wire. On the
client side, the *DBC driver can read the blob, and since it is already in the format it
expects, it can continue building the ResultSet the way it does in the current implementation.
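The shape of the optimization above can be sketched in plain Java. This is not Hive code (Hive uses Thrift serialization and SequenceFiles; the class and method names here are illustrative); it only shows the key idea: the task encodes a row batch into one blob up front, the server streams that blob untouched, and only the client decodes it.

```java
import java.io.*;
import java.util.*;

// Illustrative sketch, not Hive internals: one pre-serialized blob per row
// batch means the server does no per-row deserialization or translation.
public class BlobFetchSketch {

    // "Task side": encode a batch of rows once into a single byte blob.
    static byte[] encodeBatch(List<String> rows) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(rows.size());          // row count header
        for (String row : rows) {
            out.writeUTF(row);              // each row as a length-prefixed string
        }
        out.flush();
        return buf.toByteArray();
    }

    // "Server side": no per-row work at fetch time, just hand the blob over.
    static byte[] fetch(byte[] storedBlob) {
        return storedBlob;
    }

    // "Client side": decode the blob into rows for building the ResultSet.
    static List<String> decodeBatch(byte[] blob) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(blob));
        int n = in.readInt();
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            rows.add(in.readUTF());
        }
        return rows;
    }

    public static void main(String[] args) throws IOException {
        List<String> rows = Arrays.asList("1\talice", "2\tbob");
        byte[] blob = encodeBatch(rows);    // written once by the task
        byte[] onTheWire = fetch(blob);     // streamed as-is by the server
        System.out.println(decodeBatch(onTheWire));
    }
}
```

The point of the round trip is that the expensive step (encoding) happens once at write time in the task, so every subsequent fetch of the same batch is a plain byte copy.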

This message was sent by Atlassian JIRA
