spark-issues mailing list archives

From "Franck Tago (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-23519) Create View Commands Fails with The view output (col1,col1) contains duplicate column name
Date Thu, 10 May 2018 17:50:00 GMT

    [ https://issues.apache.org/jira/browse/SPARK-23519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16470843#comment-16470843 ]

Franck Tago commented on SPARK-23519:
-------------------------------------

I do not agree with the 'typical database' claim.

MySQL, Oracle, and Hive support this syntax.

Example:

!image-2018-05-10-10-48-57-259.png!
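For reference, a minimal sketch of the kind of statement the attached screenshot presumably demonstrates, written as plain HiveQL (the table and view names are only illustrative):

create view aview (int1, int2) as select col1, col1 from atable;

With an explicit column list, the view's output columns are named int1 and int2, so the duplicated expression in the select list does not produce duplicate column names in the view schema.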

> Create View Commands Fails with The view output (col1,col1) contains duplicate column name
> -------------------------------------------------------------------------------------------
>
>                 Key: SPARK-23519
>                 URL: https://issues.apache.org/jira/browse/SPARK-23519
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, SQL
>    Affects Versions: 2.2.1
>            Reporter: Franck Tago
>            Priority: Major
>         Attachments: image-2018-05-10-10-48-57-259.png
>
>
> 1. Create and populate a Hive table. I did this in a Hive CLI session (not that this matters).
> create table atable (col1 int);
> insert into atable values (10), (100);
> 2. Create a view from the table. (These actions were performed from a Spark shell.)
> spark.sql("create view default.aview (int1, int2) as select col1, col1 from atable")
>  java.lang.AssertionError: assertion failed: The view output (col1,col1) contains duplicate column name.
>  at scala.Predef$.assert(Predef.scala:170)
>  at org.apache.spark.sql.execution.command.ViewHelper$.generateViewProperties(views.scala:361)
>  at org.apache.spark.sql.execution.command.CreateViewCommand.prepareTable(views.scala:236)
>  at org.apache.spark.sql.execution.command.CreateViewCommand.run(views.scala:174)
>  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
>  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
>  at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
>  at org.apache.spark.sql.Dataset.<init>(Dataset.scala:183)
>  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:68)
>  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:632)
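
A possible workaround, assuming the assertion only checks the output names of the underlying query (as the failure in generateViewProperties suggests): alias the duplicated column in the select list so the query already produces distinct names, e.g.

spark.sql("create view default.aview (int1, int2) as select col1 as int1, col1 as int2 from atable")

This is only a sketch against the example table above, not a fix for the underlying issue of honoring the explicit view column list.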



