Date: Sat, 25 Apr 2015 21:57:38 +0000 (UTC)
From: "Sean Owen (JIRA)"
To: issues@spark.apache.org
Subject: [jira] [Updated] (SPARK-5203) union with different decimal type report error

    [ https://issues.apache.org/jira/browse/SPARK-5203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-5203:
-----------------------------
    Assignee: guowei

> union with different decimal type report error
> ----------------------------------------------
>
>                 Key: SPARK-5203
>                 URL: https://issues.apache.org/jira/browse/SPARK-5203
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: guowei
>            Assignee: guowei
>             Fix For: 1.4.0
>
>
> A test case like this:
> {code:sql}
> create table test (a decimal(10,1));
> select a from test union all select a*2 from test;
> {code}
> Exception thrown:
> {noformat}
> 15/01/12 16:28:54 ERROR SparkSQLDriver: Failed in [select a from test union all select a*2 from test]
> org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Unresolved attributes: *, tree:
> 'Project [*]
>  'Subquery _u1
>   'Union
>    Project [a#1]
>     MetastoreRelation default, test, None
>    Project [CAST((CAST(a#2, DecimalType()) * CAST(CAST(2, DecimalType(10,0)), DecimalType())), DecimalType(21,1)) AS _c0#0]
>     MetastoreRelation default, test, None
>         at org.apache.spark.sql.catalyst.analysis.Analyzer$CheckResolution$$anonfun$1.applyOrElse(Analyzer.scala:85)
>         at org.apache.spark.sql.catalyst.analysis.Analyzer$CheckResolution$$anonfun$1.applyOrElse(Analyzer.scala:83)
>         at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:144)
>         at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:135)
>         at org.apache.spark.sql.catalyst.analysis.Analyzer$CheckResolution$.apply(Analyzer.scala:83)
>         at org.apache.spark.sql.catalyst.analysis.Analyzer$CheckResolution$.apply(Analyzer.scala:81)
>         at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:61)
>         at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:59)
>         at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:51)
>         at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:60)
>         at scala.collection.mutable.WrappedArray.foldLeft(WrappedArray.scala:34)
>         at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:59)
>         at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:51)
>         at scala.collection.immutable.List.foreach(List.scala:318)
>         at org.apache.spark.sql.catalyst.rules.RuleExecutor.apply(RuleExecutor.scala:51)
>         at org.apache.spark.sql.SQLContext$QueryExecution.analyzed$lzycompute(SQLContext.scala:410)
>         at org.apache.spark.sql.SQLContext$QueryExecution.analyzed(SQLContext.scala:410)
>         at org.apache.spark.sql.SQLContext$QueryExecution.withCachedData$lzycompute(SQLContext.scala:411)
>         at org.apache.spark.sql.SQLContext$QueryExecution.withCachedData(SQLContext.scala:411)
>         at org.apache.spark.sql.SQLContext$QueryExecution.optimizedPlan$lzycompute(SQLContext.scala:412)
>         at org.apache.spark.sql.SQLContext$QueryExecution.optimizedPlan(SQLContext.scala:412)
>         at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:417)
>         at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:415)
>         at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:421)
>         at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:421)
>         at org.apache.spark.sql.hive.HiveContext$QueryExecution.stringResult(HiveContext.scala:369)
>         at org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:58)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:275)
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:211)
>         at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org
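For context, the type mismatch visible in the analyzed plan can be worked out by hand. The sketch below (plain Python, not Spark code; the helper name is hypothetical) applies the standard SQL decimal arithmetic rule for multiplication, precision = p1 + p2 + 1 and scale = s1 + s2, which is what produces the DecimalType(21,1) seen in the error tree: column `a` is decimal(10,1) and the literal 2 is cast to DecimalType(10,0), so `a*2` becomes decimal(21,1), while the other UNION ALL branch stays decimal(10,1). The analyzer at that time had no rule to widen the two branches to a common decimal type, leaving the plan unresolved.

```python
# Sketch of the SQL decimal result-type rule for multiplication:
#   DECIMAL(p1,s1) * DECIMAL(p2,s2) -> DECIMAL(p1+p2+1, s1+s2)
# (helper name is illustrative, not a Spark API)

def decimal_multiply_type(p1, s1, p2, s2):
    """Result (precision, scale) of DECIMAL(p1,s1) * DECIMAL(p2,s2)."""
    return (p1 + p2 + 1, s1 + s2)

# First UNION ALL branch: `a` is DECIMAL(10,1).
lhs = (10, 1)

# Second branch: `a*2`, where the literal 2 is cast to DECIMAL(10,0)
# as shown in the analyzed plan.
rhs = decimal_multiply_type(10, 1, 10, 0)

print(lhs)  # (10, 1)
print(rhs)  # (21, 1) -- matches DecimalType(21,1) in the error tree
# The two branches disagree on the decimal type, and without a
# widening rule the union cannot be resolved.
```

A common workaround before the fix was to cast both branches to the same type explicitly, e.g. `select cast(a as decimal(21,1)) from test union all select cast(a*2 as decimal(21,1)) from test;`.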