hive-issues mailing list archives

From "Vaibhav Gumashta (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HIVE-14901) HiveServer2: Use user supplied fetch size to determine #rows serialized in tasks
Date Mon, 06 Mar 2017 22:29:32 GMT

     [ https://issues.apache.org/jira/browse/HIVE-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vaibhav Gumashta updated HIVE-14901:
------------------------------------
    Target Version/s: 2.2.0  (was: 2.1.0)

> HiveServer2: Use user supplied fetch size to determine #rows serialized in tasks
> --------------------------------------------------------------------------------
>
>                 Key: HIVE-14901
>                 URL: https://issues.apache.org/jira/browse/HIVE-14901
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2, JDBC, ODBC
>    Affects Versions: 2.1.0
>            Reporter: Vaibhav Gumashta
>            Assignee: Norris Lee
>         Attachments: HIVE-14901.1.patch, HIVE-14901.2.patch, HIVE-14901.3.patch, HIVE-14901.4.patch, HIVE-14901.5.patch, HIVE-14901.6.patch, HIVE-14901.7.patch, HIVE-14901.8.patch, HIVE-14901.9.patch, HIVE-14901.patch
>
>
> Currently, we use {{hive.server2.thrift.resultset.max.fetch.size}} to decide the maximum number of rows that we write in tasks. However, we should ideally use the user-supplied value (which can be extracted from the ThriftCLIService.FetchResults request parameter) to decide how many rows to serialize into a blob in the tasks. We should still use {{hive.server2.thrift.resultset.max.fetch.size}} as an upper bound, so that we don't go OOM in tasks or HS2.
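
As a rough sketch of the intended behavior (not Hive's actual implementation; the class and method names below are hypothetical), the effective per-blob row count would be the client-supplied fetch size from the FetchResults request, capped by {{hive.server2.thrift.resultset.max.fetch.size}}:

{code:java}
// Minimal sketch (hypothetical names, not Hive's actual code): clamp a
// client-requested fetch size to the configured server-side maximum so that
// tasks and HS2 never serialize more rows per blob than the limit allows.
public final class FetchSizeClamp {

  /**
   * Returns the number of rows to serialize per blob: the client-requested
   * value from the FetchResults request, bounded above by the configured
   * hive.server2.thrift.resultset.max.fetch.size, falling back to the
   * configured value when the client sends no usable hint (<= 0).
   */
  static int effectiveFetchSize(long clientRequested, int configuredMax) {
    if (clientRequested <= 0) {
      return configuredMax;  // no usable client hint; use the server default
    }
    // upper-bound the client value to avoid OOM in tasks and HS2
    return (int) Math.min(clientRequested, (long) configuredMax);
  }

  public static void main(String[] args) {
    int configuredMax = 10000;  // hypothetical value of hive.server2.thrift.resultset.max.fetch.size
    System.out.println(effectiveFetchSize(500, configuredMax));    // 500   (client value honored)
    System.out.println(effectiveFetchSize(50000, configuredMax));  // 10000 (capped at the configured max)
    System.out.println(effectiveFetchSize(0, configuredMax));      // 10000 (no hint, server default)
  }
}
{code}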



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
