spark-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-23308) ignoreCorruptFiles should not ignore retryable IOException
Date Wed, 07 Feb 2018 00:26:00 GMT

    [ https://issues.apache.org/jira/browse/SPARK-23308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16354784#comment-16354784
] 

Steve Loughran commented on SPARK-23308:
----------------------------------------

bq. I have not heard this come up before as an issue in another implementation.

S3A's input stream handles an IOE other than EOF by incrementing its metrics, closing the stream,
and retrying once; that generally recovers from the error. If not, you are into the
unrecoverable-network-problems kind of problem, except for the special case of "you are recycling
the pool of HTTP connections and should abort that TCP connection before trying anything else".
I think there are opportunities to improve S3A there by aborting the connection before retrying.
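Roughly, that recover-once pattern looks like the following. This is a hypothetical sketch, not the actual S3AInputStream code; the `opener` callback standing in for "reopen the object at an offset" is an assumption for illustration:

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.function.Function;

// Hypothetical sketch (not the real S3A code) of the recover-once pattern:
// on an IOException other than EOF, close the stream, reopen at the current
// offset, and retry the read once. A second failure propagates to the caller.
class RetryOnceInputStream extends InputStream {
    private final Function<Long, InputStream> opener; // reopens the source at a given offset
    private InputStream in;
    private long pos = 0;

    RetryOnceInputStream(Function<Long, InputStream> opener) {
        this.opener = opener;
        this.in = opener.apply(0L);
    }

    @Override
    public int read() throws IOException {
        try {
            int b = in.read();
            if (b >= 0) pos++;
            return b;
        } catch (EOFException eof) {
            return -1;                       // end of stream is not a recoverable failure
        } catch (IOException e) {
            try { in.close(); } catch (IOException ignored) { }
            in = opener.apply(pos);          // reconnect at the last good offset
            int b = in.read();               // if this throws too, the caller sees it
            if (b >= 0) pos++;
            return b;
        }
    }
}
```

Note this only retries the read; as the comment above says, a real client also has to decide when to *abort* the TCP connection rather than return it to the HTTP connection pool.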

I don't think Spark is in a position to be clever about retries: it's too far from the FS client
to know what is retryable vs. not. It would need a policy covering all possible exceptions from
all known FS clients, splitting them into "we can recover" vs. "no, fail fast".

Trying to come up with a good policy is (a) something the FS clients should be doing and (b)
really hard to get right in the absence of frequent failures; it's usually evolution based
on bug reports. For example, [S3ARetryPolicy|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java#L87]
is very much a WiP (HADOOP-14531).
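To give a feel for the kind of triage table such a policy is, here is an illustrative sketch. It is not the real S3ARetryPolicy, just the shape of the problem: classify each failure class as retry or fail-fast, and keep evolving the table as bug reports arrive:

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InterruptedIOException;
import java.net.ConnectException;
import java.net.SocketTimeoutException;
import java.net.UnknownHostException;

// Illustrative sketch only -- not the real S3ARetryPolicy -- of the exception
// triage a filesystem client has to maintain.
class SketchRetryPolicy {
    enum Action { RETRY, FAIL_FAST }

    static Action classify(IOException e) {
        // Ordering matters: SocketTimeoutException extends
        // InterruptedIOException, so it must be tested first.
        if (e instanceof SocketTimeoutException) return Action.RETRY;     // transient network stall
        if (e instanceof ConnectException)       return Action.RETRY;     // refused connection may clear up
        if (e instanceof InterruptedIOException) return Action.FAIL_FAST; // the caller asked us to stop
        if (e instanceof FileNotFoundException)  return Action.FAIL_FAST; // the object really is missing
        if (e instanceof UnknownHostException)   return Action.FAIL_FAST; // DNS rarely self-heals quickly
        return Action.FAIL_FAST;                                          // default: fail fast on unknowns
    }
}
```

Even this toy version shows why the split belongs in the FS client: getting the `instanceof` ordering and the per-exception verdicts right is exactly the bug-report-driven evolution described above.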

Marcio: I'm surprised you are getting so many socket timeouts. If this is happening in EC2, it's
*potentially* throttling-related; overloaded connection pools raise ConnectionPoolTimeoutException,
apparently.

> ignoreCorruptFiles should not ignore retryable IOException
> ----------------------------------------------------------
>
>                 Key: SPARK-23308
>                 URL: https://issues.apache.org/jira/browse/SPARK-23308
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.2.1
>            Reporter: Márcio Furlani Carmona
>            Priority: Minor
>
> When `spark.sql.files.ignoreCorruptFiles` is set, it totally ignores any kind of RuntimeException
or IOException, but some IOExceptions may happen even if the file is not corrupted.
> One example is SocketTimeoutException, which can be retried to possibly fetch the
data; it does not mean the data is corrupted.
>  
> See: 
> https://github.com/apache/spark/blob/e30e2698a2193f0bbdcd4edb884710819ab6397c/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileScanRDD.scala#L163



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

