spark-issues mailing list archives

From "sam (JIRA)" <>
Subject [jira] [Created] (SPARK-26770) Misleading/unhelpful error message when wrapping a null in an Option
Date Tue, 29 Jan 2019 13:26:00 GMT
sam created SPARK-26770:

             Summary: Misleading/unhelpful error message when wrapping a null in an Option
                 Key: SPARK-26770
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 2.3.2
            Reporter: sam



import org.apache.spark.sql.Dataset

// Using Options to indicate nullable fields
case class Product(productID: Option[Int],
                   productName: Option[String])

// Minimal repro: build the Dataset from a local Seq
// (assumes an active SparkSession `spark` and `import spark.implicits._` for toDS)
val productExtract: Dataset[Product] = Seq(
  Product(
    productID = Some(6050286),
    // user mistake here, should be `None` not `Some(null)`
    productName = Some(null)
  )
).toDS()



will give an error like the one below. The error is thrown from quite deep down, but there
should be some handling logic further up that checks for nulls and gives a more informative
error message. For example, it could tell the user which field is null, or it could detect the
`Some(null)` mistake and suggest using `None` instead.
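
For comparison, wrapping the possibly-null value in `Option(...)` (which yields `None` for a
null reference) avoids the problem entirely. A minimal sketch, reusing the `Product` case class
above and assuming the same SparkSession with `spark.implicits._` in scope:

// Option(x) is None when x is null, so no null ever reaches the encoder
val possiblyNullName: String = null

val fixedExtract: Dataset[Product] = Seq(
  Product(
    productID = Some(6050286),
    productName = Option(possiblyNullName)  // becomes None instead of Some(null)
  )
).toDS()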

Whatever the exception is, it shouldn't be a NullPointerException: this is clearly a user error,
so it should surface as some kind of user-error exception.
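
As a sketch of the kind of check I mean (the helper below is hypothetical, not an existing
Spark API), the fields could be validated before encoding so the failure names the offending
field:

// Hypothetical pre-encoding check: reports which Option field wraps a null
def assertNoSomeNull(p: Product): Product = {
  val fieldNames = Seq("productID", "productName")
  p.productIterator.zip(fieldNames.iterator).foreach {
    case (Some(null), name) =>
      throw new IllegalArgumentException(
        s"Field '$name' is Some(null); use None (or Option(...)) for a missing value")
    case _ => ()
  }
  p
}

// Fails fast with a field name instead of an NPE deep inside generated code
assertNoSomeNull(Product(Some(6050286), Some(null)))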

Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 9 in stage
1.0 failed 4 times, most recent failure: Lost task 9.3 in stage 1.0 (TID 276,,
executor 1): java.lang.NullPointerException
	at org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter.write(
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.serializefromobject_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.mapelements_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:620)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.executor.Executor$
	at java.util.concurrent.ThreadPoolExecutor.runWorker(
	at java.util.concurrent.ThreadPoolExecutor$


I've seen quite a few other people hitting this error, but I don't think it's for the same reason: https://groups.google.com/forum/#!topic/spark-connector-user/Dt6ilC9Dn54
