flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-3190) Retry rate limits for DataStream API
Date Fri, 17 Jun 2016 13:01:05 GMT

    [ https://issues.apache.org/jira/browse/FLINK-3190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335988#comment-15335988 ]

ASF GitHub Bot commented on FLINK-3190:
---------------------------------------

Github user tillrohrmann commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1954#discussion_r67505960
  
    --- Diff: docs/setup/config.md ---
    @@ -139,6 +140,15 @@ Default value is 1.
     - `restart-strategy.fixed-delay.delay`: Delay between restart attempts, used if the default restart strategy is set to "fixed-delay".
     Default value is the `akka.ask.timeout`.
     
    +- `restart-strategy.failure-rate.max-failures-per-unit`: Maximum number of restarts in given time unit before failing a job in "failure-rate" strategy.
    +Default value is 1.
    +
    +- `restart-strategy.failure-rate.failure-rate-unit`: Time unit for measuring failure rate in "failure-rate" strategy. One of java.util.concurrent.TimeUnit values.
    +Default value is `MINUTES`.
    --- End diff --
    
    I think it's better to specify an arbitrary interval.
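
For orientation, here is a minimal sketch of how a job could request such a failure-rate strategy programmatically from the DataStream API, assuming a factory method along the lines of RestartStrategies.failureRateRestart(maxFailuresPerInterval, failureInterval, delayBetweenAttempts); the method name, signature, and chosen values are assumptions for illustration and are not part of the diff above:

    import java.util.concurrent.TimeUnit;

    import org.apache.flink.api.common.restartstrategy.RestartStrategies;
    import org.apache.flink.api.common.time.Time;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class FailureRateRestartExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Illustrative values: allow at most 10 restarts within any one-hour
            // window, waiting 10 seconds between restart attempts.
            env.setRestartStrategy(RestartStrategies.failureRateRestart(
                    10,                              // max failures per measuring interval
                    Time.of(1, TimeUnit.HOURS),      // measuring interval
                    Time.of(10, TimeUnit.SECONDS))); // delay between restart attempts

            env.fromElements(1, 2, 3).print();
            env.execute("failure-rate restart strategy sketch");
        }
    }

The cluster-wide counterpart of this per-job call would be the flink-conf.yaml options being documented in the diff, presumably selected via the existing restart-strategy key.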


> Retry rate limits for DataStream API
> ------------------------------------
>
>                 Key: FLINK-3190
>                 URL: https://issues.apache.org/jira/browse/FLINK-3190
>             Project: Flink
>          Issue Type: Improvement
>            Reporter: Sebastian Klemke
>            Assignee: Michał Fijołek
>            Priority: Minor
>
> For a long-running stream processing job, absolute numbers of retries don't make much sense: the job will accumulate transient errors over time and will eventually die when thresholds are exceeded. Rate limits are better suited to this scenario: a job should only die if it fails too often in a given time frame. To better overcome transient errors, retry delays could be used, as suggested in other issues.
> Absolute numbers of retries can still make sense if failing operators don't make any progress at all. We can measure progress by OperatorState changes and by observing output, as long as the operator in question is not a sink. If the operator state changes and/or the operator produces output, we can assume it makes progress.
> As an example, let's say we configured a retry rate limit of 10 retries per hour for a non-sink operator A. If the operator fails once every 10 minutes and produces output between failures, it should not lead to job termination. But if the operator fails 11 times in an hour or does not produce output between 11 consecutive failures, the job should be terminated.
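
To make that example concrete, here is a small, self-contained sketch of the rate check it describes: failure timestamps are kept in a sliding window, and a restart is refused once more than the allowed number of failures falls inside one measuring interval. The class and method names are made up for illustration and this is not Flink's implementation; the progress check (operator state changes / observed output) from the description is left out.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Sliding-window failure-rate check (illustration only): a restart is
    // refused once more than maxFailuresPerWindow failures fall into one
    // measuring window.
    public class FailureRateCheck {

        private final int maxFailuresPerWindow;
        private final long windowMillis;
        private final Deque<Long> failureTimestamps = new ArrayDeque<>();

        public FailureRateCheck(int maxFailuresPerWindow, long windowMillis) {
            this.maxFailuresPerWindow = maxFailuresPerWindow;
            this.windowMillis = windowMillis;
        }

        /** Records a failure at the given time; returns true if a restart is still allowed. */
        public boolean canRestart(long nowMillis) {
            failureTimestamps.addLast(nowMillis);
            // Drop failures that no longer fall into the measuring window.
            while (!failureTimestamps.isEmpty()
                    && nowMillis - failureTimestamps.peekFirst() > windowMillis) {
                failureTimestamps.removeFirst();
            }
            return failureTimestamps.size() <= maxFailuresPerWindow;
        }

        public static void main(String[] args) {
            // 10 retries per hour, as in the example above.
            FailureRateCheck check = new FailureRateCheck(10, 60 * 60 * 1000L);

            // One failure every 10 minutes: at most 7 failures ever fall into
            // a one-hour window, so restarts stay allowed.
            long t = 0;
            for (int i = 1; i <= 12; i++) {
                t = i * 10 * 60 * 1000L;
                System.out.println("spaced failure " + i + " -> restart allowed: " + check.canRestart(t));
            }

            // A burst of failures within the same hour exceeds the limit,
            // at which point the job would be terminated.
            for (int i = 1; i <= 8; i++) {
                t += 1000L;
                System.out.println("burst failure " + i + " -> restart allowed: " + check.canRestart(t));
            }
        }
    }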



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
