spark-issues mailing list archives

From "Gal Topper (JIRA)" <>
Subject [jira] [Commented] (SPARK-19476) Running threads in Spark DataFrame foreachPartition() causes NullPointerException
Date Sun, 26 Mar 2017 19:29:42 GMT


Gal Topper commented on SPARK-19476:

I'm pretty sure we're talking about different things. My code running inside foreachPartition
naturally doesn't need any Spark internals. It just takes the data and writes it to a database,
and that writing process happens not to be single-threaded. It doesn't copy any data, and
it works pretty well using the (far from trivial) workaround described and provided above.
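The workaround itself is not included in this message, but the general approach is to drain the iterator on the executor thread before fanning work out. As a sketch only (the names here are illustrative, not the actual code referenced above, and the `write` function stands in for the real database call):

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, Future}

object DrainFirst {
  // Hypothetical helper: materialize the partition on the calling thread,
  // then fan the already-materialized rows out to worker threads.
  def processPartition[A, B](iterator: Iterator[A])(write: A => B): Seq[B] = {
    // Draining happens here, on the thread that received the iterator,
    // so any thread-local state the iterator depends on is still intact.
    val rows = iterator.toVector
    // Only plain data crosses thread boundaries; the iterator never does.
    val futures = rows.map(row => Future(write(row)))
    Await.result(Future.sequence(futures), Duration.Inf)
  }
}
```

Inside `foreachPartition`, one would call `DrainFirst.processPartition(iterator)(writeToDatabase)`. The cost is holding the partition in memory, which is the trade-off the non-trivial workaround presumably avoids.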

If this thread-local is too entrenched for the issue to be feasibly fixed, I would at least suggest documenting the limitation (e.g. "@param iterator may only be accessed by the original executor thread", and/or elsewhere in the docs). That's what I'd do, anyway.
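For what it's worth, the failure mode can be demonstrated without Spark at all: a `ThreadLocal` that was only ever set on one thread returns null from any other. A minimal, Spark-free sketch:

```scala
object ThreadLocalDemo {
  // A ThreadLocal with no initial value yields null on any thread that
  // never called set() -- the same behavior that surfaces as an NPE when
  // a second thread consumes the partition iterator.
  def valueSeenByOtherThread(): AnyRef = {
    val local = new ThreadLocal[AnyRef]
    local.set("only visible to the setting thread")
    var seen: AnyRef = "unset"
    val t = new Thread(() => { seen = local.get() })
    t.start()
    t.join()
    seen // null, because this Thread never called local.set()
  }
}
```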

> Running threads in Spark DataFrame foreachPartition() causes NullPointerException
> ---------------------------------------------------------------------------------
>                 Key: SPARK-19476
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.0, 1.6.1, 1.6.2, 1.6.3, 2.0.0, 2.0.1, 2.0.2, 2.1.0
>            Reporter: Gal Topper
> First reported on [Stack Overflow|].
> I use multiple threads inside foreachPartition(), which works great for me except for
when the underlying iterator is TungstenAggregationIterator. Here is a minimal code snippet
to reproduce:
> {code:title=Reproduce.scala|borderStyle=solid}
>     import scala.concurrent.ExecutionContext.Implicits.global
>     import scala.concurrent.duration.Duration
>     import scala.concurrent.{Await, Future}
>     import org.apache.spark.SparkContext
>     import org.apache.spark.sql.SQLContext
>     object Reproduce extends App {
>       val sc = new SparkContext("local", "reproduce")
>       val sqlContext = new SQLContext(sc)
>       import sqlContext.implicits._
>       val df = sc.parallelize(Seq(1)).toDF("number").groupBy("number").count()
>       df.foreachPartition { iterator =>
>         val f = Future(iterator.toVector)
>         Await.result(f, Duration.Inf)
>       }
>     }
> {code}
> When I run this, I get:
> {noformat}
>     java.lang.NullPointerException
>         at
>         at
>         at scala.collection.Iterator$$anon$
>         at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
> {noformat}
> I believe I actually understand why this happens - TungstenAggregationIterator uses a
ThreadLocal variable that returns null when called from a thread other than the original thread
that got the iterator from Spark. From examining the code, this does not appear to differ
between recent Spark versions.
> However, this limitation is specific to TungstenAggregationIterator, and not documented,
as far as I'm aware.

This message was sent by Atlassian JIRA
