spark-reviews mailing list archives

From JoshRosen <...@git.apache.org>
Subject [GitHub] spark pull request: [SPARK-9393] [SQL] Fix several error-handling ...
Date Tue, 28 Jul 2015 02:31:59 GMT
Github user JoshRosen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7710#discussion_r35609643
  
    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformation.scala
---
    @@ -146,49 +198,83 @@ case class ScriptTransformation(
             }
           }
     
    -      val (inputSerde, inputSoi) = ioschema.initInputSerDe(input)
    -      val dataOutputStream = new DataOutputStream(outputStream)
    -      val outputProjection = new InterpretedProjection(input, child.output)
    +      writerThread.start()
     
    -      // TODO make the 2048 configurable?
    -      val stderrBuffer = new CircularBuffer(2048)
    -      // Consume the error stream from the pipeline, otherwise it will be blocked if
    -      // the pipeline is full.
    -      new RedirectThread(errorStream, // input stream from the pipeline
    -        stderrBuffer,                 // output to a circular buffer
    -        "Thread-ScriptTransformation-STDERR-Consumer").start()
    +      outputIterator
    +    }
     
    -      // Put the write(output to the pipeline) into a single thread
    -      // and keep the collector as remain in the main thread.
    -      // otherwise it will causes deadlock if the data size greater than
    -      // the pipeline / buffer capacity.
    -      new Thread(new Runnable() {
    -        override def run(): Unit = {
    -          Utils.tryWithSafeFinally {
    -            iter
    -              .map(outputProjection)
    -              .foreach { row =>
    -              if (inputSerde == null) {
    -                val data = row.mkString("", ioschema.inputRowFormatMap("TOK_TABLEROWFORMATFIELD"),
    -                  ioschema.inputRowFormatMap("TOK_TABLEROWFORMATLINES")).getBytes("utf-8")
    -
    -                outputStream.write(data)
    -              } else {
    -                val writable = inputSerde.serialize(
    -                  row.asInstanceOf[GenericInternalRow].values, inputSoi)
    -                prepareWritable(writable).write(dataOutputStream)
    -              }
    -            }
    -            outputStream.close()
    -          } {
    -            if (proc.waitFor() != 0) {
    --- End diff --
    
    To see more clearly why the old code could hang indefinitely, look at what happens in this
`finally` block: if the writer thread dies before closing the stream, the `OutputStream` feeding
the child process is never closed, so the child never sees EOF on its stdin and never exits.
`proc.waitFor()` in the `finally` block then blocks forever, and we hang indefinitely.
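    A minimal, hypothetical sketch of the hazard (not the actual Spark code) — it uses `cat`,
which exits only after reading EOF on stdin, to show why the writer must close the child's
stdin even on failure:

    ```scala
    import java.io.OutputStream

    object WriterCloseSketch {
      // Writes lines to `cat`'s stdin, simulating a writer that dies mid-stream.
      // The names here (feed, failAfter) are illustrative, not from the PR.
      def feed(data: Seq[String], failAfter: Int): Unit = {
        val proc = new ProcessBuilder("cat").start()
        val out: OutputStream = proc.getOutputStream
        try {
          data.zipWithIndex.foreach { case (line, i) =>
            if (i == failAfter) throw new RuntimeException("writer died")
            out.write((line + "\n").getBytes("utf-8"))
          }
        } finally {
          // Without this close, `cat` never sees EOF, never exits, and a
          // subsequent proc.waitFor() would block forever -- the deadlock
          // described above.
          out.close()
        }
        proc.waitFor()
      }

      def main(args: Array[String]): Unit = {
        try feed(Seq("a", "b", "c"), failAfter = 1)
        catch { case e: RuntimeException => println(s"caught: ${e.getMessage}") }
        println("no hang")
      }
    }
    ```

    Because the close happens in `finally` on the writer's own thread, the child is guaranteed
to see EOF even when the writer fails, so the reaping thread can never wedge on `waitFor()`.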



