spark-issues mailing list archives

From "Davies Liu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-3683) PySpark Hive query generates "NULL" instead of None
Date Tue, 28 Oct 2014 22:05:34 GMT

    [ https://issues.apache.org/jira/browse/SPARK-3683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14187563#comment-14187563 ]

Davies Liu commented on SPARK-3683:
-----------------------------------

This commit removed the special case for "NULL":
{code}
commit cf989601d0e784e1c3507720e64636891fe28292
Author: Cheng Lian <lian.cs.zju@gmail.com>
Date:   Fri May 30 22:13:11 2014 -0700

    [SPARK-1959] String "NULL" shouldn't be interpreted as null value

    JIRA issue: [SPARK-1959](https://issues.apache.org/jira/browse/SPARK-1959)

    Author: Cheng Lian <lian.cs.zju@gmail.com>

    Closes #909 from liancheng/spark-1959 and squashes the following commits:

    306659c [Cheng Lian] [SPARK-1959] String "NULL" shouldn't be interpreted as null value

diff --git a/sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveOperators.scala b/sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveOperators.scala
index f141139..d263c31 100644
--- a/sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveOperators.scala
+++ b/sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveOperators.scala
@@ -113,7 +113,6 @@ case class HiveTableScan(
   }

   private def unwrapHiveData(value: Any) = value match {
-    case maybeNull: String if maybeNull.toLowerCase == "null" => null
     case varchar: HiveVarchar => varchar.getValue
     case decimal: HiveDecimal => BigDecimal(decimal.bigDecimalValue)
     case other => other
{code}

So this should be a bug in Hive.
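The effect of the removed case can be sketched outside Spark in plain Python (hypothetical `unwrap_before`/`unwrap_after` helpers mirroring the Scala match above; not actual Spark code):

```python
# Sketch of the unwrapHiveData pattern match before and after SPARK-1959.
# Before the commit, a leading case converted the literal string "NULL"
# (case-insensitively) into a real null; after the commit, strings pass
# through untouched, so a "NULL" handed back by Hive stays a string.

def unwrap_before(value):
    """Pre-SPARK-1959 behavior: the string 'null'/'NULL' becomes None."""
    if isinstance(value, str) and value.lower() == "null":
        return None
    return value

def unwrap_after(value):
    """Post-SPARK-1959 behavior: values pass through unchanged."""
    return value
```

So if Hive itself serializes a null string column as the text "NULL", nothing on the Spark side converts it back to a real null any more, which matches the behavior the reporter sees in PySpark.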

> PySpark Hive query generates "NULL" instead of None
> ---------------------------------------------------
>
>                 Key: SPARK-3683
>                 URL: https://issues.apache.org/jira/browse/SPARK-3683
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, SQL
>    Affects Versions: 1.1.0
>            Reporter: Tamas Jambor
>            Assignee: Davies Liu
>
> When I run a Hive query in Spark SQL, the returned Row object does not convert a Hive NULL into Python None; it keeps it as the string 'NULL'.
> It's only an issue with String type, works with other types.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

