spark-user mailing list archives

From "Wangfei (X)" <>
Subject Re: Spark SQL on large number of columns
Date Tue, 19 May 2015 11:04:23 GMT
And which version are you using?

Sent from my iPhone

On 19 May 2015, at 18:29, "ayan guha" <<>> wrote:

Can you kindly share your code?

On Tue, May 19, 2015 at 8:04 PM, madhu phatak <<>> wrote:
I am trying to run a Spark SQL aggregation on a file with 26k columns. The number of rows is very small.
I am running into an issue where Spark takes a huge amount of time to parse the SQL and create
a logical plan. Even with just one row, it takes more than an hour just to get past
the parsing. Any idea how to optimize in this kind of scenario?
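For context, a minimal sketch of the kind of query involved (the table name, column names, and SUM aggregate here are illustrative assumptions; the original code was not shared on the thread). Aggregating every one of 26k columns means one expression per column, so the generated SQL string alone runs to hundreds of kilobytes, and the parser and analyzer must process each expression in the logical plan:

```python
# Sketch of a wide aggregation query like the one described.
# Column names (c0..c25999), the table name, and the SUM aggregate
# are assumptions for illustration only.
num_cols = 26000
columns = [f"c{i}" for i in range(num_cols)]

# One aggregate expression per column -> a very large SQL statement.
select_list = ", ".join(f"SUM({c}) AS sum_{c}" for c in columns)
query = f"SELECT {select_list} FROM wide_table"

print(len(columns))  # 26000 expressions for the parser/analyzer to handle
print(len(query))    # the SQL text itself is hundreds of KB long
```

Seeing the size of the query text helps explain why parsing and analysis, rather than execution, dominate when the row count is tiny but the column count is huge.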

Madhukara Phatak

Best Regards,
Ayan Guha