spark-user mailing list archives

From unk1102 <>
Subject How to tune unavoidable group by query?
Date Fri, 09 Oct 2015 19:07:16 GMT
Hi, I have the following group by query, which I have tried both through the
DataFrame API and through hiveContext.sql(), but both shuffle huge amounts of
data and are slow. I have around 8 fields passed in as the group by fields:

df("blabla").groupBy("col1", "col2", "col3", ..., "col8").agg("bla...")

hiveContext.sql("insert into table partitions bla bla group by ...")

I have tried almost all the tuning parameters, like Tungsten, lz4 shuffle,
more around 6.0. I am using Spark 1.4.0. Please guide, thanks in advance.
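With eight grouping columns the shuffle itself cannot be avoided, but the
volume shuffled depends on how many distinct key combinations each map task
emits, not on the raw row count, because Spark's aggregations combine rows
map-side before shuffling. A minimal sketch of that idea in plain Python
(the function names and data are illustrative, not Spark API):

```python
from collections import defaultdict

def partial_aggregate(partition):
    """Map-side combine: collapse rows to one partial sum per key
    before anything crosses the network."""
    acc = defaultdict(int)
    for key, value in partition:
        acc[key] += value
    return list(acc.items())

def shuffle_and_reduce(partials):
    """Reduce side: merge the partial sums from every partition."""
    acc = defaultdict(int)
    for part in partials:
        for key, value in part:
            acc[key] += value
    return dict(acc)

# Two input partitions; the tuple key stands in for the 8-column grouping key.
p1 = [(("a",), 1), (("a",), 2), (("b",), 5)]
p2 = [(("a",), 3), (("b",), 4)]

partials = [partial_aggregate(p1), partial_aggregate(p2)]
result = shuffle_and_reduce(partials)
# 4 partial records cross the "shuffle" instead of 5 input rows; the
# savings grow with the ratio of rows to distinct keys per partition.
```

If the eight columns together are nearly unique per row, map-side combining
buys almost nothing and the full data set is shuffled regardless, which may
be why tuning flags alone do not help; in that case adjusting the shuffle
parallelism (spark.sql.shuffle.partitions, default 200) is usually the next
knob to try.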

