spark-issues mailing list archives

From "xdcjie (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (SPARK-24339) spark sql can not prune column in transform/map/reduce query
Date Tue, 22 May 2018 05:13:00 GMT

     [ https://issues.apache.org/jira/browse/SPARK-24339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

xdcjie updated SPARK-24339:
---------------------------
    Priority: Minor  (was: Major)

> spark sql can not prune column in transform/map/reduce query
> ------------------------------------------------------------
>
>                 Key: SPARK-24339
>                 URL: https://issues.apache.org/jira/browse/SPARK-24339
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.1.1, 2.1.2, 2.2.0, 2.2.1
>            Reporter: xdcjie
>            Priority: Minor
>              Labels: map, reduce, sql, transform
>             Fix For: 2.1.1, 2.1.2, 2.2.0, 2.2.1
>
>
> I was using {{TRANSFORM USING}} with branch-2.1/2.2 and noticed that it scans all columns of the data for a query like:
> {code:sql}
> SELECT TRANSFORM(usid, cch) USING 'python test.py' AS (u1, c1, u2, c2) FROM test_table;{code}
> Its physical plan looks like:
> {code:java}
> == Physical Plan ==
> ScriptTransformation [usid#17, cch#9], python test.py, [u1#784, c1#785, u2#786, c2#787], HiveScriptIOSchema(List(),List(),Some(org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe),Some(org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe),List((field.delim, )),List((field.delim,	)),Some(org.apache.hadoop.hive.ql.exec.TextRecordReader),Some(org.apache.hadoop.hive.ql.exec.TextRecordWriter),false)
> +- FileScan parquet [sh#0L,clk#1L,chg#2L,qey#3,ship#4,chgname#5,sid#6,bid#7,dis#8L,cch#9,wch#10,wid#11L,arank#12L,rtag#13,iid#14,uid#15,pid#16,usid#17,wdid#18,bid#19,oqleft#20,oqright#21,poqvalue#22,tm#23,... 367 more fields] Batched: false, Format: Parquet, Location: InMemoryFileIndex[file:/Users/Downloads/part-r-00093-0ef5d59f-2e08-4085-9b46-458a1652932a.g..., PartitionFilters: [], PushedFilters: [], ReadSchema: struct<sh:bigint,clk:bigint,chg:bigint,qey:string,ship:string,chgname:string,s...
> {code}
> In our scenario the Parquet file has 400 columns, so this query takes significantly longer than it should.
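Until the optimizer prunes columns through {{TRANSFORM}}, one possible workaround (a sketch only, reusing the table and column names from the report above) is to project the needed columns in a subquery, so the underlying scan only reads those columns:

{code:sql}
-- Hypothetical workaround: project usid and cch before TRANSFORM,
-- so the FileScan should read only these two columns instead of all 400.
SELECT TRANSFORM(t.usid, t.cch) USING 'python test.py' AS (u1, c1, u2, c2)
FROM (SELECT usid, cch FROM test_table) t;
{code}

Checking the plan with {{EXPLAIN}} should show the FileScan's ReadSchema restricted to the projected columns if the inner projection is pushed down.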



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

