drill-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (DRILL-5941) Skip header / footer logic works incorrectly for Hive tables when file has several input splits
Date Mon, 20 Nov 2017 06:53:00 GMT

    [ https://issues.apache.org/jira/browse/DRILL-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258863#comment-16258863 ]

ASF GitHub Bot commented on DRILL-5941:

Github user ppadma commented on the issue:

    @arina-ielchiieva I am concerned about the performance impact of grouping all splits into
a single reader (essentially, not parallelizing at all).
    Wondering if it is possible to do it this way:
    During planning, in HiveScan, if the table is a text file and has a header/footer, get the
number of rows to skip. Read the header/footer rows and, based on that, adjust the first/last
split and the offset within them. Splits which contain only header/footer rows can be removed
from inputSplits. In HiveSubScan, change hiveReadEntry to be a list (one entry for each split).
Add an entry in hiveReadEntry, numRowsToSkip (or offsetToStart), which can be passed to the
recordReaders in getBatch for each subScan. This is fairly complicated and I am sure I might
be missing some details :-)
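The planning-time adjustment sketched in the comment above can be illustrated with a minimal, self-contained sketch. This is not Drill's actual API: the `Split` class, `planHeaderSkip` method, and the row counts are all hypothetical stand-ins; the point is only to show how skip counts could be assigned per split so the header is dropped exactly once.

```java
// Hypothetical sketch (not Drill's HiveScan/HiveSubScan code): assign a
// per-split skip count at planning time so only the first surviving split
// drops the header rows, and splits holding nothing but header rows are
// removed from the plan entirely. Footer trimming would work analogously
// on the last split.
import java.util.ArrayList;
import java.util.List;

public class SplitSkipPlanner {

    /** Minimal stand-in for a Hive input split: a [start, start + length) byte range. */
    static class Split {
        final long start;
        final long length;
        long rowsToSkip;   // rows the record reader should drop at the start of this split

        Split(long start, long length) {
            this.start = start;
            this.length = length;
        }
    }

    /**
     * Distributes headerRows across the splits in order: splits consumed
     * entirely by header rows are dropped, the first surviving split gets
     * the remaining skip count, later splits skip nothing.
     * rowsPerSplit is assumed known (or estimated) at planning time.
     */
    static List<Split> planHeaderSkip(List<Split> splits, long headerRows, long rowsPerSplit) {
        List<Split> result = new ArrayList<>();
        long remaining = headerRows;
        for (Split s : splits) {
            if (remaining >= rowsPerSplit) {
                remaining -= rowsPerSplit;   // split holds only header rows: remove it
                continue;
            }
            s.rowsToSkip = remaining;        // first surviving split skips the rest
            remaining = 0;
            result.add(s);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Split> splits = new ArrayList<>();
        splits.add(new Split(0, 256));       // illustrative byte ranges
        splits.add(new Split(256, 236));
        List<Split> planned = planHeaderSkip(splits, 1, 1000);
        System.out.println(planned.size() + " splits, first skips "
            + planned.get(0).rowsToSkip + " rows");
    }
}
```

With a one-row header, both splits survive and only the first carries a skip count, which is the behavior the comment proposes passing down to the record readers via the per-split read entry.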

> Skip header / footer logic works incorrectly for Hive tables when file has several input splits
> -----------------------------------------------------------------------------------------------
>                 Key: DRILL-5941
>                 URL: https://issues.apache.org/jira/browse/DRILL-5941
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Storage - Hive
>    Affects Versions: 1.11.0
>            Reporter: Arina Ielchiieva
>            Assignee: Arina Ielchiieva
>             Fix For: Future
> *To reproduce*
> 1. Create a csv file with two columns (key, value) and 3000029 rows, where the first row is
a header.
> The data file size should be greater than the chunk size of 256 MB. Copy the file to the
distributed file system.
> 2. Create table in Hive:
> {noformat}
> CREATE TABLE h_table(
>   `key` bigint,
>   `value` string)
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.mapred.TextInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
> LOCATION
>   'maprfs:/tmp/h_table'
> TBLPROPERTIES (
>   'skip.header.line.count'='1');
> {noformat}
> 3. Execute query {{select * from hive.h_table}} in Drill (query data using Hive plugin).
The result will return fewer rows than expected. The expected result is 3000028 (total count minus
one row as header).
> *The root cause*
> Since the file is greater than the default chunk size, it is split into several fragments,
known as input splits. For example:
> {noformat}
> maprfs:/tmp/h_table/h_table.csv:0+268435456
> maprfs:/tmp/h_table/h_table.csv:268435457+492782112
> {noformat}
> TextHiveReader is responsible for handling the skip header and / or footer logic.
> Currently Drill creates a reader [for each input split|https://github.com/apache/drill/blob/master/contrib/storage-hive/core/src/main/java/org/apache/drill/exec/store/hive/HiveScanBatchCreator.java#L84]
and the skip header and / or footer logic is applied to each input split, though ideally the
above-mentioned input splits should be read by one reader so that the skip header / footer
logic is applied correctly.
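The root cause can be demonstrated with a small, self-contained sketch. This is not Drill's reader code; the split contents and row counts below are invented for illustration. It shows that when each reader independently applies the header skip to its own split, one row is lost per split instead of one row total.

```java
// Illustrative sketch of the bug: a file with 1 header row and 5 data rows
// is broken into two input splits. Applying "skip 1 header row" in each
// split's reader drops 2 rows instead of 1, so the query returns 4 rows
// where 5 are expected.
import java.util.List;

public class PerSplitSkipBug {

    /** Buggy per-split read: unconditionally skips the header count in every split. */
    static int readSplit(List<String> rows, int skipHeader) {
        return Math.max(0, rows.size() - skipHeader);
    }

    public static void main(String[] args) {
        List<String> split1 = List.of("key,value", "1,a", "2,b");  // starts with the real header
        List<String> split2 = List.of("3,c", "4,d", "5,e");        // pure data, no header

        int perSplit = readSplit(split1, 1) + readSplit(split2, 1); // bug: skips in both splits
        int expected = (split1.size() + split2.size()) - 1;         // correct: skip once per file

        System.out.println("returned=" + perSplit + " expected=" + expected);
    }
}
```

Scaled up to the reproduction case above, each extra input split discards one more data row, which is why the 3000029-row file returns fewer than the expected 3000028 rows.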

This message was sent by Atlassian JIRA
