spark-issues mailing list archives

From "Mariusz Galus (JIRA)" <>
Subject [jira] [Commented] (SPARK-22255) SPARK-22255 FileAppender InputStream Read timeout and blocking state
Date Wed, 11 Oct 2017 18:46:00 GMT


Mariusz Galus commented on SPARK-22255:

I would like an answer on why we need to block. I am using the RollingFileAppender with Java's
piped I/O stream classes: I send Kafka records to a PipedOutputStream that is connected to the
FileAppender through a PipedInputStream.
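For reference, the wiring looks roughly like this (a minimal sketch; the pipe buffer size and
the writeRecord helper are my own, and since RollingFileAppender is private[spark], the
constructor call is shown only in a comment):

    import java.io.{PipedInputStream, PipedOutputStream}

    // Writer side: each Kafka record's payload goes into the pipe.
    val out = new PipedOutputStream()
    val in  = new PipedInputStream(out, 64 * 1024) // pipe buffer size is an assumption

    def writeRecord(payload: Array[Byte]): Unit = {
      out.write(payload)
      out.flush()
    }

    // Reader side: the appender consumes `in` on its own thread, e.g.
    //   new RollingFileAppender(in, activeFile, rollingPolicy, sparkConf)
    // where activeFile, rollingPolicy, and sparkConf are set up elsewhere.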

If I do allow it to block, in my custom case I end up with a broken pipe: my FileOutputStream
closes, the InputStream that is blocked in read throws an IOException for the broken pipe, and
then I get flooded with "Read end dead" IOExceptions.
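The same failure mode is easy to reproduce with plain java.io, no Spark involved (a minimal
sketch; the exception messages reflect PipedInputStream/PipedOutputStream behavior as I
understand it):

    import java.io.{IOException, PipedInputStream, PipedOutputStream}

    val out = new PipedOutputStream()
    val in  = new PipedInputStream(out)

    val reader = new Thread(() => {
      val buf = new Array[Byte](1024)
      try {
        var n = 0
        while (n != -1) {
          n = in.read(buf) // blocks here whenever the pipe is empty
        }
      } catch {
        // If the writer thread dies without closing the stream, the blocked
        // read() surfaces as IOException("Pipe broken").
        case e: IOException => println(s"reader failed: ${e.getMessage}")
      }
    })
    reader.start()

    out.write("one record".getBytes("UTF-8"))
    // A clean out.close() would unblock read() with EOF (-1). If instead the
    // writer thread simply exits, the reader gets "Pipe broken", and any later
    // write from another thread gets IOException("Read end dead").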

I may be overlooking something critical to the overall implementation, as I am just using
this small piece in a custom solution.

> SPARK-22255 FileAppender InputStream Read timeout and blocking state
> --------------------------------------------------------------------
>                 Key: SPARK-22255
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.2.0
>            Reporter: Mariusz Galus
>            Priority: Minor
> The FileAppender logic blocks when reading from the InputStream. This can be avoided simply
> with an InputStream.available() check prior to reading (a sketch follows this quoted
> description).
> If this is done, a variable holding the estimated available bytes needs to be introduced for
> use in two conditionals: the conditional for reading from the InputStream and the conditional
> for appending to the file.
> See:
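For what it's worth, the change the description proposes would look roughly like this against
FileAppender's append loop (a sketch only; markedForStop, appendToFile, inputStream, and
bufferSize are names from the Spark 2.2 source, while pollIntervalMillis is my own assumption,
and a real patch would still need a way to detect end-of-stream, since available() == 0 cannot
distinguish "no data yet" from a closed stream):

    val buf = new Array[Byte](bufferSize)
    val pollIntervalMillis = 50L // assumed back-off interval
    var n = 0
    while (!markedForStop && n != -1) {
      // Estimated bytes readable without blocking: the variable the
      // description asks for, checked before both conditionals.
      val available = inputStream.available()
      if (available > 0) {
        n = inputStream.read(buf) // should now return promptly
        if (n > 0) {
          appendToFile(buf, n)
        }
      } else {
        Thread.sleep(pollIntervalMillis) // back off instead of blocking in read()
      }
    }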
