spark-issues mailing list archives

From "Nan Zhu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-20251) Spark streaming skips batches in a case of failure
Date Mon, 10 Apr 2017 00:16:41 GMT

    [ https://issues.apache.org/jira/browse/SPARK-20251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962318#comment-15962318
] 

Nan Zhu commented on SPARK-20251:
---------------------------------

More details: by "be proceeding" I mean it is expected that the compute() method for
the next batch executes before the app shuts down; however, the app should eventually
shut down, since we have signalled the awaiting condition set in awaitTermination().

That "eventual shutdown" never happened (the issue does not occur consistently).
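To make the "awaiting condition" concrete: internally, awaitTermination() blocks on a waiter object until a stop or error is signalled. The following is a minimal Python sketch of that condition-variable pattern, not Spark's actual implementation; the class and method names (ContextWaiter, notify_stop, wait_for_stop) only mirror Spark's internal ContextWaiter for illustration.

```python
import threading

class ContextWaiter:
    """Hypothetical sketch of the pattern behind awaitTermination():
    a caller blocks on a condition variable until another thread
    signals stop or reports an error."""

    def __init__(self):
        self._cond = threading.Condition()
        self._stopped = False
        self._error = None

    def notify_stop(self):
        # Signal normal shutdown and wake all waiters.
        with self._cond:
            self._stopped = True
            self._cond.notify_all()

    def notify_error(self, exc):
        # Record a failure and wake all waiters.
        with self._cond:
            self._error = exc
            self._cond.notify_all()

    def wait_for_stop(self, timeout=None):
        # Block until stopped or an error arrives; re-raise the error.
        with self._cond:
            self._cond.wait_for(
                lambda: self._stopped or self._error is not None,
                timeout=timeout)
            if self._error is not None:
                raise self._error
            return self._stopped


waiter = ContextWaiter()
# Simulate another thread signalling shutdown shortly after we start waiting.
threading.Timer(0.1, waiter.notify_stop).start()
stopped = waiter.wait_for_stop(timeout=5)
print(stopped)
```

The bug described above corresponds to the case where notify_stop() (or its equivalent) is signalled but the waiting thread nevertheless fails to return, so the app never completes its "eventual shutdown".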

> Spark streaming skips batches in a case of failure
> --------------------------------------------------
>
>                 Key: SPARK-20251
>                 URL: https://issues.apache.org/jira/browse/SPARK-20251
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.1.0
>            Reporter: Roman Studenikin
>
> We are experiencing strange behaviour in a Spark streaming application. Sometimes it just
skips a batch in the case of a job failure and starts working on the next one.
> We expect it to attempt to reprocess the batch, not to skip it. Is this a bug, or are we
missing some important configuration params?
> Screenshots from spark UI:
> http://pasteboard.co/1oRW0GDUX.png
> http://pasteboard.co/1oSjdFpbc.png



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

