drill-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (DRILL-4905) Push down the LIMIT to the parquet reader scan to limit the numbers of records read
Date Wed, 28 Sep 2016 17:18:21 GMT

    [ https://issues.apache.org/jira/browse/DRILL-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15530280#comment-15530280 ]

ASF GitHub Bot commented on DRILL-4905:
---------------------------------------

Github user jinfengni commented on a diff in the pull request:

    https://github.com/apache/drill/pull/597#discussion_r80970307
  
    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetGroupScan.java ---
    @@ -115,6 +115,8 @@
       private List<RowGroupInfo> rowGroupInfos;
       private Metadata.ParquetTableMetadataBase parquetTableMetadata = null;
       private String cacheFileRoot = null;
    +  private int batchSize;
    +  private static final int DEFAULT_BATCH_LENGTH = 256 * 1024;
    --- End diff ---
    
    Now I think I understand what you mean. You want to cap store.parquet.record_batch_size at 256K by setting DEFAULT_BATCH_LENGTH = 256K in ParquetGroupScan. At execution time, you pick the min of batch_size and the option value, so the result is never greater than the option value.
    
    If that's correct, can we remove DEFAULT_BATCH_LENGTH from ParquetGroupScan and instead use the batch_size specified in the new option you added for the non-LIMIT case?
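    A minimal sketch of the capping approach described above, assuming the min-of-option-and-default scheme; DEFAULT_BATCH_LENGTH mirrors the constant in the diff, while the class name, the effectiveBatchSize helper, and the hard-coded option values are illustrative only and not Drill's actual code:
    
    // Sketch only: the min-of-option-and-cap idea from the review comment.
    // DEFAULT_BATCH_LENGTH mirrors the constant in the diff; the helper and
    // its caller are hypothetical, not ParquetGroupScan's real code.
    public class BatchSizeCapSketch {
    
      // Upper bound on rows per record batch, as added in the diff.
      private static final int DEFAULT_BATCH_LENGTH = 256 * 1024;
    
      // Effective batch size: the store.parquet.record_batch_size option value,
      // capped at DEFAULT_BATCH_LENGTH, so it is never greater than the option.
      static long effectiveBatchSize(long recordBatchSizeOption) {
        return Math.min(recordBatchSizeOption, DEFAULT_BATCH_LENGTH);
      }
    
      public static void main(String[] args) {
        System.out.println(effectiveBatchSize(32 * 1024));   // option below the cap -> 32768
        System.out.println(effectiveBatchSize(512 * 1024));  // option above the cap -> 262144
      }
    }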
    



> Push down the LIMIT to the parquet reader scan to limit the numbers of records read
> -----------------------------------------------------------------------------------
>
>                 Key: DRILL-4905
>                 URL: https://issues.apache.org/jira/browse/DRILL-4905
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Storage - Parquet
>    Affects Versions: 1.8.0
>            Reporter: Padma Penumarthy
>            Assignee: Padma Penumarthy
>             Fix For: 1.9.0
>
>
> Limit the number of records read from disk by pushing down the limit to parquet reader.
> For queries like
> select * from <table> limit N; 
> where N < the size of a Parquet row group, we currently read 32K/64K rows or the entire row group. This needs to be optimized to read only N rows.
>  
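A minimal sketch of the intended behavior, assuming a plain Java iterator as a stand-in for the Parquet reader; the LimitPushdownSketch class and the readWithLimit helper are hypothetical and not part of Drill's scan code:

    // Sketch only: shows the effect of pushing LIMIT N into the scan so that
    // at most N rows are pulled, instead of a full 32K/64K batch or row group.
    // The iterator-based "reader" is a hypothetical stand-in for the Parquet reader.
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    
    class LimitPushdownSketch {
    
      // Reads at most 'limit' rows from the source and returns them.
      static <T> List<T> readWithLimit(Iterator<T> rowSource, int limit) {
        List<T> rows = new ArrayList<>();
        while (rows.size() < limit && rowSource.hasNext()) {
          rows.add(rowSource.next());
        }
        return rows;  // stops after N rows even if the row group holds many more
      }
    
      public static void main(String[] args) {
        List<Integer> rowGroup = new ArrayList<>();
        for (int i = 0; i < 64 * 1024; i++) {
          rowGroup.add(i);  // simulate a 64K-row row group
        }
        // "select * from <table> limit 10" should touch only 10 rows.
        System.out.println(readWithLimit(rowGroup.iterator(), 10).size());  // prints 10
      }
    }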



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
