spark-reviews mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] [spark] HyukjinKwon commented on a change in pull request #25214: [SPARK-28461][SQL] Pad Decimal numbers with trailing zeros to the scale of the column
Date Fri, 23 Aug 2019 02:08:07 GMT
URL: https://github.com/apache/spark/pull/25214#discussion_r316951489
 
 

 ##########
 File path: docs/sql-migration-guide-upgrade.md
 ##########
 @@ -159,6 +159,32 @@ license: |
 
  - Since Spark 3.0, a Dataset query fails if it contains an ambiguous column reference
caused by a self join. A typical example: given `val df1 = ...; val df2 = df1.filter(...);`,
the query `df1.join(df2, df1("a") > df2("a"))` returns an empty result, which is quite confusing.
This is because Spark cannot resolve Dataset column references that point to tables being
self-joined, and `df1("a")` is exactly the same as `df2("a")` in Spark. To restore the behavior
before Spark 3.0, you can set `spark.sql.analyzer.failAmbiguousSelfJoin` to `false`.
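
The self-join pitfall described above can be sketched as follows. This is an illustrative sketch, not code from the PR; it assumes an existing `SparkSession` named `spark`:

```scala
// Illustration of the ambiguous self-join from the migration note.
// Assumes a SparkSession named `spark` is already available.
import spark.implicits._

val df1 = Seq((1, "x"), (2, "y")).toDF("a", "b")
val df2 = df1.filter($"b" === "x")   // df2 is derived from df1 (a self join)

// df1("a") and df2("a") resolve to the same underlying column, so Spark
// cannot tell which side of the join each reference belongs to.
// Before Spark 3.0 this silently returned an empty result;
// since Spark 3.0 it fails with an analysis error instead.
df1.join(df2, df1("a") > df2("a"))

// To restore the pre-3.0 behavior:
spark.conf.set("spark.sql.analyzer.failAmbiguousSelfJoin", "false")
```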
 
+  - Since Spark 3.0, we pad decimal numbers with trailing zeros to the scale of the column
for Hive result, for example:
 
 Review comment:
   To me, it's not clear what "Hive result" means here.
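
For background, the padding behavior the quoted diff describes (filling a decimal value with trailing zeros out to the column's declared scale) can be sketched with plain `java.math.BigDecimal`; this is a hypothetical illustration, not Spark's actual implementation:

```scala
// Hypothetical sketch of padding a decimal value to a column's scale,
// analogous to what the migration note describes for Spark 3.0 output.
import java.math.BigDecimal

def padToScale(value: String, scale: Int): String =
  // Increasing the scale only appends zeros, so no rounding is needed.
  new BigDecimal(value).setScale(scale).toPlainString

padToScale("2.5", 3)  // "2.500"
```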

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org

