drill-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (DRILL-5356) Refactor Parquet Record Reader
Date Wed, 29 Mar 2017 15:03:42 GMT

    [ https://issues.apache.org/jira/browse/DRILL-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15947297#comment-15947297 ]

ASF GitHub Bot commented on DRILL-5356:
---------------------------------------

Github user ppadma commented on a diff in the pull request:

    https://github.com/apache/drill/pull/789#discussion_r108557760
  
    --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/ParquetRecordReader.java ---
    @@ -307,164 +231,49 @@ public FragmentContext getFragmentContext() {
         return fragmentContext;
       }
     
    -  /**
    -   * Returns data type length for a given {@see ColumnDescriptor} and it's corresponding
    -   * {@see SchemaElement}. Neither is enough information alone as the max
    -   * repetition level (indicating if it is an array type) is in the ColumnDescriptor and
    -   * the length of a fixed width field is stored at the schema level.
    -   *
    -   * @return the length if fixed width, else -1
    -   */
    -  private int getDataTypeLength(ColumnDescriptor column, SchemaElement se) {
    -    if (column.getType() != PrimitiveType.PrimitiveTypeName.BINARY) {
    -      if (column.getMaxRepetitionLevel() > 0) {
    -        return -1;
    -      }
    -      if (column.getType() == PrimitiveType.PrimitiveTypeName.FIXED_LEN_BYTE_ARRAY) {
    -        return se.getType_length() * 8;
    -      } else {
    -        return getTypeLengthInBits(column.getType());
    -      }
    -    } else {
    -      return -1;
    -    }
    -  }
    -
    -  @SuppressWarnings({ "resource", "unchecked" })
       @Override
       public void setup(OperatorContext operatorContext, OutputMutator output) throws ExecutionSetupException {
         this.operatorContext = operatorContext;
    -    if (!isStarQuery()) {
    -      columnsFound = new boolean[getColumns().size()];
    -      nullFilledVectors = new ArrayList<>();
    +    if (isStarQuery()) {
    +      schema = new ParquetSchema(fragmentContext.getOptions(), rowGroupIndex);
    +    } else {
    +      schema = new ParquetSchema(fragmentContext.getOptions(), getColumns());
         }
    -    columnStatuses = new ArrayList<>();
    -    List<ColumnDescriptor> columns = footer.getFileMetaData().getSchema().getColumns();
    -    allFieldsFixedLength = true;
    -    ColumnDescriptor column;
    -    ColumnChunkMetaData columnChunkMetaData;
    -    int columnsToScan = 0;
    -    mockRecordsRead = 0;
     
    -    MaterializedField field;
    +//    ParquetMetadataConverter metaConverter = new ParquetMetadataConverter();
    +//    FileMetaData fileMetaData;
     
    --- End diff --
    
    Instead of commenting these out, remove the lines if they are not needed.
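
    A minimal sketch (not part of the pull request) of how the start of the rewritten
    setup() could read with the above suggestion applied, assuming only the fields and
    ParquetSchema constructors visible in the added lines of the hunk quoted above:

        @Override
        public void setup(OperatorContext operatorContext, OutputMutator output) throws ExecutionSetupException {
          this.operatorContext = operatorContext;
          // For a star query the schema is built from the row group (all columns);
          // otherwise it is built from the projected column list.
          if (isStarQuery()) {
            schema = new ParquetSchema(fragmentContext.getOptions(), rowGroupIndex);
          } else {
            schema = new ParquetSchema(fragmentContext.getOptions(), getColumns());
          }
          // ... remainder of setup() unchanged; the commented-out
          // ParquetMetadataConverter / FileMetaData lines are deleted outright.
        }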


> Refactor Parquet Record Reader
> ------------------------------
>
>                 Key: DRILL-5356
>                 URL: https://issues.apache.org/jira/browse/DRILL-5356
>             Project: Apache Drill
>          Issue Type: Improvement
>    Affects Versions: 1.10.0, 1.11.0
>            Reporter: Paul Rogers
>            Assignee: Paul Rogers
>            Priority: Minor
>             Fix For: 1.11.0
>
>
> The Parquet record reader class is a key part of Drill that has evolved over time to become somewhat hard to follow.
> A number of us are working on Parquet-related tasks and find we have to spend an uncomfortable amount of time trying to understand the code. In particular, this writer needs to figure out how to convince the reader to provide higher-density record batches.
> Rather than continue to decipher the complex code multiple times, this ticket requests refactoring the code to make it functionally identical, but structurally cleaner. The result will be a faster time to value when working with this code.
> This is a lower-priority change and will be coordinated with others working on this code base. This ticket is only for the record reader class itself; it does not include the various readers and writers that Parquet uses since another project is actively modifying those classes.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
