drill-issues mailing list archives

From "Arina Ielchiieva (JIRA)" <j...@apache.org>
Subject [jira] [Created] (DRILL-5516) Use max allowed allocated memory when defining batch size for hbase record reader
Date Tue, 16 May 2017 10:53:04 GMT
Arina Ielchiieva created DRILL-5516:
---------------------------------------

             Summary: Use max allowed allocated memory when defining batch size for hbase record reader
                 Key: DRILL-5516
                 URL: https://issues.apache.org/jira/browse/DRILL-5516
             Project: Apache Drill
          Issue Type: Improvement
          Components: Storage - HBase
    Affects Versions: 1.10.0
            Reporter: Arina Ielchiieva
            Assignee: Arina Ielchiieva


If early limit 0 optimization is enabled (alter session set `planner.enable_limit0_optimization` = true), Drill will, when executing limit 0 queries, return data types from the available metadata if possible.
When Drill cannot determine data types from metadata (or if early limit 0 optimization is disabled), it will read the first batch of data and determine the schema from it.
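
For illustration only, the option can be toggled per session over JDBC and a limit 0 query used to probe the schema. This is a hedged sketch: the connection URL assumes an embedded/local Drillbit and hbase.`my_table` is a hypothetical table name.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

public class Limit0SchemaProbe {
  public static void main(String[] args) throws SQLException {
    // Assumes the Drill JDBC driver is on the classpath and a local Drillbit is running.
    try (Connection conn = DriverManager.getConnection("jdbc:drill:zk=local");
         Statement stmt = conn.createStatement()) {
      // Enable early limit 0 optimization for this session.
      stmt.execute("ALTER SESSION SET `planner.enable_limit0_optimization` = true");

      // LIMIT 0 returns no rows; only the result set metadata (schema) is of interest.
      try (ResultSet rs = stmt.executeQuery("SELECT * FROM hbase.`my_table` LIMIT 0")) {
        ResultSetMetaData md = rs.getMetaData();
        for (int i = 1; i <= md.getColumnCount(); i++) {
          System.out.println(md.getColumnName(i) + " : " + md.getColumnTypeName(i));
        }
      }
    }
  }
}
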
The HBase record reader determines the max batch size using a magic number (4000 rows), which can lead to OOM when rows are large. The overall vector/batch size issue will be reconsidered in future releases; this is a temporary fix to avoid OOM.

Instead of limiting batches by row count, we will limit them by the max allowed allocated memory, which will default to 64 MB. If the first row in a batch is larger than the allowed default, it will still be written to the batch, but that batch will contain only this row.
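
Below is a minimal sketch of the memory-bounded admission check described above. The class and method names, and the way the 64 MB default is wired in, are assumptions for illustration and not Drill's actual HBaseRecordReader code.

// Hypothetical sketch of memory-bounded batch sizing as described above.
public class BatchSizeLimiter {

  // Assumed default: 64 MB of allocated vector memory per batch.
  private static final long DEFAULT_MAX_ALLOCATED_MEMORY = 64L * 1024 * 1024;

  private final long maxAllocatedBytes;

  public BatchSizeLimiter() {
    this(DEFAULT_MAX_ALLOCATED_MEMORY);
  }

  public BatchSizeLimiter(long maxAllocatedBytes) {
    this.maxAllocatedBytes = maxAllocatedBytes;
  }

  /**
   * Decides whether another row may be written to the current batch.
   *
   * @param rowsInBatch    rows already written to the batch
   * @param allocatedBytes memory currently allocated for the batch's vectors
   * @return true if the next row should go into this batch
   */
  public boolean hasCapacity(int rowsInBatch, long allocatedBytes) {
    // Always admit the first row, even if it alone exceeds the limit;
    // in that case the batch will contain only this single row.
    if (rowsInBatch == 0) {
      return true;
    }
    // Otherwise stop filling the batch once the memory limit is reached,
    // instead of stopping at a fixed row count (the old magic number 4000).
    return allocatedBytes < maxAllocatedBytes;
  }
}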



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
