spark-issues mailing list archives

From "Michael Armbrust (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (SPARK-4365) Remove unnecessary filter call on records returned from parquet library
Date Fri, 14 Nov 2014 23:17:34 GMT

     [ https://issues.apache.org/jira/browse/SPARK-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Armbrust resolved SPARK-4365.
-------------------------------------
    Resolution: Fixed

Issue resolved by pull request 3229
[https://github.com/apache/spark/pull/3229]

> Remove unnecessary filter call on records returned from parquet library
> -----------------------------------------------------------------------
>
>                 Key: SPARK-4365
>                 URL: https://issues.apache.org/jira/browse/SPARK-4365
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 1.1.0
>            Reporter: Yash Datta
>            Priority: Minor
>             Fix For: 1.2.0
>
>
> Since the parquet library has been updated, we no longer need to filter the records returned from the parquet library for null records, as the library now skips them itself:
> from parquet-hadoop/src/main/java/parquet/hadoop/InternalParquetRecordReader.java
>   public boolean nextKeyValue() throws IOException, InterruptedException {
>     boolean recordFound = false;
>     while (!recordFound) {
>       // no more records left
>       if (current >= total) { return false; }
>       try {
>         checkRead();
>         currentValue = recordReader.read();
>         current++;
>         if (recordReader.shouldSkipCurrentRecord()) {
>           // this record is being filtered via the filter2 package
>           if (DEBUG) LOG.debug("skipping record");
>           continue;
>         }
>         if (currentValue == null) {
>           // only happens with FilteredRecordReader at end of block
>           current = totalCountLoadedSoFar;
>           if (DEBUG) LOG.debug("filtered record reader reached end of block");
>           continue;
>         }
>         recordFound = true;
>         if (DEBUG) LOG.debug("read value: " + currentValue);
>       } catch (RuntimeException e) {
>         throw new ParquetDecodingException(format("Can not read value at %d in block %d in file %s", current, currentBlock, file), e);
>       }
>     }
>     return true;
>   }
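To see why the downstream filter is redundant: nextKeyValue() above only returns true after it has found a non-null, non-skipped record, so the iterator it backs can never yield null. A minimal sketch of that guarantee (the class and method names here are illustrative stand-ins, not Spark's actual code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class RedundantNullFilter {
    // Models the updated reader's contract: null records are skipped
    // internally, so the record iterator never yields null.
    static Iterator<String> readerIterator(List<String> raw) {
        List<String> nonNull = new ArrayList<>();
        for (String v : raw) {
            if (v != null) nonNull.add(v);  // the internal skipping, as in nextKeyValue()
        }
        return nonNull.iterator();
    }

    public static void main(String[] args) {
        Iterator<String> records = readerIterator(Arrays.asList("a", null, "b"));
        // The defensive null filter previously applied on top removes nothing:
        List<String> filteredAgain = new ArrayList<>();
        while (records.hasNext()) {
            String v = records.next();
            if (v != null) filteredAgain.add(v);  // this check can never be false
        }
        System.out.println(filteredAgain);  // prints [a, b]
    }
}
```

Removing the extra filter saves one pass over every returned record without changing results.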



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

