flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-3190) Retry rate limits for DataStream API
Date Wed, 06 Jul 2016 08:48:11 GMT

    [ https://issues.apache.org/jira/browse/FLINK-3190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364008#comment-15364008 ]

ASF GitHub Bot commented on FLINK-3190:

Github user tillrohrmann commented on a diff in the pull request:

    --- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/restart/FailureRateRestartStrategy.java
    @@ -35,19 +34,21 @@
      * with a fixed time delay in between.
     public class FailureRateRestartStrategy implements RestartStrategy {
    -	private final int maxFailuresPerUnit;
    -	private final TimeUnit failureRateUnit;
    -	private final long delayBetweenRestartAttempts;
    -	private List<Long> restartTimestamps = new ArrayList<>();
    +	private final Duration failuresInterval;
    +	private final Duration delayInterval;
    +	private EvictingQueue<Long> restartTimestampsQueue;
     	private boolean disabled = false;
    -	public FailureRateRestartStrategy(int maxFailuresPerUnit, TimeUnit failureRateUnit, long delayBetweenRestartAttempts) {
    -		Preconditions.checkArgument(maxFailuresPerUnit > 0, "Maximum number of restart attempts per time unit must be greater than 0.");
    -		Preconditions.checkArgument(delayBetweenRestartAttempts >= 0, "Delay between restart attempts must be positive");
    +	public FailureRateRestartStrategy(int maxFailuresPerInterval, Duration failuresInterval, Duration delayInterval) {
    +		Preconditions.checkArgument(maxFailuresPerInterval > 0, "Maximum number of restart attempts per time unit must be greater than 0.");
    +		Preconditions.checkNotNull(failuresInterval, "Failures interval cannot be null.");
    +		Preconditions.checkNotNull(failuresInterval.length() > 0, "Failures interval must be greater than 0 ms.");
    --- End diff --
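The last added line in the diff passes a boolean expression to `checkNotNull`. Since `checkNotNull` takes an object reference, the boolean autoboxes to a non-null `Boolean` and the check always succeeds, so the intended lower bound on the interval is never enforced; `checkArgument` is the fitting call. A runnable sketch of the intended validation, using `java.time.Duration` and a hypothetical stand-in for Flink's `Preconditions` utility (names are illustrative, not Flink's actual API):

```java
import java.time.Duration;

// PreconditionsSketch is a hypothetical stand-in for Flink's Preconditions utility.
public final class PreconditionsSketch {

    static <T> T checkNotNull(T ref, String msg) {
        if (ref == null) {
            throw new NullPointerException(msg);
        }
        return ref;
    }

    static void checkArgument(boolean cond, String msg) {
        if (!cond) {
            throw new IllegalArgumentException(msg);
        }
    }

    // Validates constructor arguments the way the diff appears to intend.
    static void validate(int maxFailuresPerInterval, Duration failuresInterval, Duration delayInterval) {
        checkArgument(maxFailuresPerInterval > 0,
                "Maximum number of restart attempts per time unit must be greater than 0.");
        checkNotNull(failuresInterval, "Failures interval cannot be null.");
        // checkArgument, not checkNotNull: a boolean passed to checkNotNull
        // autoboxes and is always non-null, so the bound would never be enforced.
        checkArgument(failuresInterval.toMillis() > 0,
                "Failures interval must be greater than 0 ms.");
        checkNotNull(delayInterval, "Delay interval cannot be null.");
        checkArgument(!delayInterval.isNegative(),
                "Delay between restart attempts must not be negative.");
    }
}
```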

> Retry rate limits for DataStream API
> ------------------------------------
>                 Key: FLINK-3190
>                 URL: https://issues.apache.org/jira/browse/FLINK-3190
>             Project: Flink
>          Issue Type: Improvement
>            Reporter: Sebastian Klemke
>            Assignee: Michał Fijołek
>            Priority: Minor
> For a long running stream processing job, absolute numbers of retries don't make much sense: the job will accumulate transient errors over time and will die eventually when thresholds are exceeded. Rate limits are better suited in this scenario: a job should only die if it fails too often in a given time frame. To better overcome transient errors, retry delays could be used, as suggested in other issues.
> Absolute numbers of retries can still make sense if failing operators don't make any progress at all. We can measure progress by OperatorState changes and by observing output, as long as the operator in question is not a sink. If the operator's state changes and/or the operator produces output, we can assume it makes progress.
> As an example, let's say we configured a retry rate limit of 10 retries per hour and a non-sink operator A. If the operator fails once every 10 minutes and produces output between failures, this should not lead to job termination. But if the operator fails 11 times in an hour, or does not produce output between 11 consecutive failures, the job should be terminated.
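The rate-limit decision in the example above can be sketched as a sliding-window check over failure timestamps. This is a hypothetical illustration, not Flink's actual `FailureRateRestartStrategy`; the class and method names are made up:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: allow a restart only if fewer than
// maxFailuresPerInterval failures occurred within the trailing
// failuresIntervalMillis window.
public class FailureRateLimiter {

    private final int maxFailuresPerInterval;
    private final long failuresIntervalMillis;
    private final Deque<Long> failureTimestamps = new ArrayDeque<>();

    public FailureRateLimiter(int maxFailuresPerInterval, long failuresIntervalMillis) {
        this.maxFailuresPerInterval = maxFailuresPerInterval;
        this.failuresIntervalMillis = failuresIntervalMillis;
    }

    public boolean canRestart(long nowMillis) {
        // Evict timestamps that fell out of the trailing window.
        while (!failureTimestamps.isEmpty()
                && nowMillis - failureTimestamps.peekFirst() > failuresIntervalMillis) {
            failureTimestamps.removeFirst();
        }
        return failureTimestamps.size() < maxFailuresPerInterval;
    }

    public void recordFailure(long nowMillis) {
        failureTimestamps.addLast(nowMillis);
    }
}
```

With a limit of 10 failures per hour, one failure every 10 minutes keeps at most about six timestamps in the window, so restarts stay permitted; ten failures in quick succession fill the window and the next failure is refused.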

This message was sent by Atlassian JIRA
