drill-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (DRILL-5516) Use max allowed allocated memory when defining batch size for hbase record reader
Date Wed, 17 May 2017 21:14:04 GMT

    [ https://issues.apache.org/jira/browse/DRILL-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014783#comment-16014783 ]

ASF GitHub Bot commented on DRILL-5516:

Github user paul-rogers commented on a diff in the pull request:

    --- Diff: contrib/storage-hbase/src/main/java/org/apache/drill/exec/store/hbase/HBaseRecordReader.java
    @@ -187,8 +189,8 @@ public int next() {
         int rowCount = 0;
    -    done:
    -    for (; rowCount < TARGET_RECORD_COUNT; rowCount++) {
    +    // if first row is larger than allowed max size in batch, it will be added anyway
    +    do {
    --- End diff --
    Still need to monitor the row count: it cannot exceed 64K. So the loop terminates
either when the batch exceeds the memory limit OR when it reaches the max row count. For the
row count, we might as well use the original limit, unless we know enough to pick a better one.
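The dual termination condition described above can be sketched as follows. This is a minimal, hypothetical illustration (class, method, and field names are illustrative, not Drill's actual code; the real reader tracks memory allocated to value vectors):

```java
// Sketch of a do-while loop that stops on either condition: the batch's
// memory use reaches the limit, or the row count hits the 64K cap that
// value vectors impose per batch. Names here are illustrative only.
public class BatchLoopSketch {
  static final int MAX_ROW_COUNT = Character.MAX_VALUE + 1; // 64K rows per batch
  static final long MAX_BATCH_MEMORY = 64L * 1024 * 1024;   // 64 MB default limit

  /** Returns how many rows, starting at 'start', fit into one batch. */
  static int fillBatch(long[] rowSizes, int start) {
    int rowCount = 0;
    long batchMemory = 0;
    do {
      if (start + rowCount >= rowSizes.length) {
        break;                                  // no more rows to read
      }
      batchMemory += rowSizes[start + rowCount];
      rowCount++;
      // Note: the first row is always admitted, even if it alone exceeds
      // the memory limit; the batch then contains only that row.
    } while (batchMemory < MAX_BATCH_MEMORY && rowCount < MAX_ROW_COUNT);
    return rowCount;
  }
}
```

An oversized first row yields a one-row batch, while normal-sized rows accumulate until the memory limit or the 64K row cap is hit, whichever comes first.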

> Use max allowed allocated memory when defining batch size for hbase record reader
> ---------------------------------------------------------------------------------
>                 Key: DRILL-5516
>                 URL: https://issues.apache.org/jira/browse/DRILL-5516
>             Project: Apache Drill
>          Issue Type: Improvement
>          Components: Storage - HBase
>    Affects Versions: 1.10.0
>            Reporter: Arina Ielchiieva
>            Assignee: Arina Ielchiieva
> If early limit 0 optimization is enabled (alter session set `planner.enable_limit0_optimization`
= true), then when executing limit 0 queries Drill will return data types from available metadata
if possible.
> When Drill cannot determine data types from metadata (or if early limit 0 optimization
is disabled), Drill will read the first batch of data and determine the schema from it.
> The HBase reader determines max batch size using a magic number (4000 rows), which can lead to OOM
when rows are large. The overall vector/batch size issue will be reconsidered in future
releases. This is a temporary fix to avoid OOM.
> Instead of using a row-count limit, we will use the max allowed allocated memory, which will default
to 64 MB. If the first row in a batch is larger than the allowed default, it will still be written to the batch,
but the batch will contain only that row.

This message was sent by Atlassian JIRA
