spark-issues mailing list archives

From "Hyukjin Kwon (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-17636) Parquet filter push down doesn't handle struct fields
Date Thu, 22 Sep 2016 23:55:20 GMT

    [ https://issues.apache.org/jira/browse/SPARK-17636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514853#comment-15514853 ]

Hyukjin Kwon commented on SPARK-17636:
--------------------------------------

Yes, Spark's implementation currently does not push down filters for nested fields; I remember
this was confirmed by committers. I will leave another comment after searching for related
JIRAs and GitHub comments as references, to make sure.
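Conceptually, pushdown requires translating each Catalyst predicate into a Parquet filter, and predicates over struct fields are where the translation currently bails out. A minimal, hypothetical sketch of that decision (this is not Spark's actual code; the function name and the `(column, operator, value)` tuple shape are made up for illustration):

```python
# Hypothetical sketch of the pushdown eligibility check; not Spark's real code.
# A predicate is modeled as a (column, operator, value) tuple.

def to_parquet_filter(pred):
    """Return a Parquet-pushable filter, or None if it must stay in Spark."""
    col, op, value = pred
    if "." in col:
        # A nested reference like "day_timestamp.timestamp": Spark currently
        # declines to push these down, so the scan reports no PushedFilters
        # and the predicate is evaluated row-by-row after the scan.
        return None
    # A top-level column like "sale_id" is eligible for pushdown,
    # surfacing as e.g. GreaterThan(sale_id,4) in the physical plan.
    return (col, op, value)
```

Under this model, `("sale_id", ">", 4)` survives pushdown while `("day_timestamp.timestamp", ">", 4)` does not, matching the two physical plans quoted in the issue description below.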

> Parquet filter push down doesn't handle struct fields
> -----------------------------------------------------
>
>                 Key: SPARK-17636
>                 URL: https://issues.apache.org/jira/browse/SPARK-17636
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, SQL
>    Affects Versions: 1.6.2
>            Reporter: Mitesh
>            Priority: Minor
>
> There's a *PushedFilters* entry for a simple numeric field, but not for a numeric field inside a struct. Not sure if this is a Spark limitation because of Parquet, or a Spark-only limitation.
> {quote} 
> scala> hc.read.parquet("s3a://some/parquet/file").select("day_timestamp", "sale_id")
> res5: org.apache.spark.sql.DataFrame = [day_timestamp: struct<timestamp:bigint,timezone:string>, sale_id: bigint]
> scala> res5.filter("sale_id > 4").queryExecution.executedPlan
> res9: org.apache.spark.sql.execution.SparkPlan =
> Filter[23814] [args=(sale_id#86324L > 4)][outPart=UnknownPartitioning(0)][outOrder=List()]
> +- Scan ParquetRelation[day_timestamp#86302,sale_id#86324L] InputPaths: s3a://some/parquet/file, PushedFilters: [GreaterThan(sale_id,4)]
> scala> res5.filter("day_timestamp.timestamp > 4").queryExecution.executedPlan
> res10: org.apache.spark.sql.execution.SparkPlan =
> Filter[23815] [args=(day_timestamp#86302.timestamp > 4)][outPart=UnknownPartitioning(0)][outOrder=List()]
> +- Scan ParquetRelation[day_timestamp#86302,sale_id#86324L] InputPaths: s3a://some/parquet/file
> {quote} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

