spark-issues mailing list archives

From "Harish (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (SPARK-17908) Column names corrupted in PySpark dataframe groupBy
Date Thu, 13 Oct 2016 16:58:20 GMT

    [ https://issues.apache.org/jira/browse/SPARK-17908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15572478#comment-15572478 ]

Harish edited comment on SPARK-17908 at 10/13/16 4:58 PM:
----------------------------------------------------------

Yes, your code structure is the same as mine, but I have 70M records with 1000 columns. It
works with simple joins as above, but when you modify the DF multiple times this happens.
I had been getting this error since 1.6.0, but I didn't raise it because I couldn't prove it
with a working use case. It happens frequently with my code, so I tried the rename workaround.

Here are my steps:
df = df.select('key1', 'key2', 'key3', 'val', 'total')   # ~70 million records
df = df.withColumn('key2', func.lit('ABC'))
df1 = df.groupBy('key1', 'key2', 'key3').agg(func.count(func.col('val')).alias('total'))
df1 = df1.withColumnRenamed('key2', 'key2')   # rename to the same name
df3 = df.join(df1, ['key1', 'key2', 'key3'])\
        .withColumn('newcol', func.col('val')/func.col('total'))
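
For anyone who wants to try this locally, here is a minimal self-contained sketch of the
same sequence. The toy data, the local SparkSession setup, and the choice to leave 'total'
out of the input DF (so the final func.col('total') stays unambiguous after the join) are
mine, not part of the original report; it assumes Spark 2.0+.

from pyspark.sql import SparkSession
import pyspark.sql.functions as func

spark = SparkSession.builder.master('local[*]').getOrCreate()

# Toy stand-in for the 70M-row, 1000-column DF.
df = spark.createDataFrame(
    [('a', 'x', 'p', 1.0), ('a', 'x', 'p', 3.0), ('b', 'y', 'q', 2.0)],
    ['key1', 'key2', 'key3', 'val'])

df = df.withColumn('key2', func.lit('ABC'))           # overwrite key2 with a constant
df1 = df.groupBy('key1', 'key2', 'key3') \
        .agg(func.count(func.col('val')).alias('total'))
df1 = df1.withColumnRenamed('key2', 'key2')           # no-op rename (the reported workaround)
df3 = df.join(df1, ['key1', 'key2', 'key3']) \
        .withColumn('newcol', func.col('val') / func.col('total'))
df3.show()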


I just wanted to see if anyone else has observed this behavior. I will try to find a code
sample that proves the issue; if I can't within another 1-2 days, I will mark it as not reproducible.




> Column names corrupted in PySpark dataframe groupBy
> ---------------------------------------------------
>
>                 Key: SPARK-17908
>                 URL: https://issues.apache.org/jira/browse/SPARK-17908
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.6.0, 1.6.1, 1.6.2, 2.0.0, 2.0.1
>            Reporter: Harish
>            Priority: Minor
>
> I have a DF, say df:
> df1 = df.groupBy('key1', 'key2', 'key3').agg(func.count(func.col('val')).alias('total'))
> df3 = df.join(df1, ['key1', 'key2', 'key3'])\
>              .withColumn('newcol', func.col('val')/func.col('total'))
> I am getting an error that key2 is not present in df1, which is not true, because
df1.show() shows the data with key2.
> Then I added this code before the join: df1 = df1.withColumnRenamed('key2', 'key2'),
a rename to the same name. Then it works.
> The stack trace says the column is missing, but it is not.
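
If the underlying cause is the usual self-join resolution quirk (df1 is derived from df,
so both sides of the join carry attributes with the same lineage -- my guess, not confirmed
in this thread), explicitly aliasing both sides is another way to make the references
unambiguous without the no-op rename. A sketch, assuming pyspark.sql.functions is imported
as func:

df3 = df.alias('a').join(df1.alias('b'), ['key1', 'key2', 'key3']) \
        .withColumn('newcol', func.col('a.val') / func.col('b.total'))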


