airflow-commits mailing list archives

From "ASF subversion and git services (Jira)" <>
Subject [jira] [Commented] (AIRFLOW-5889) AWS Batch Operator - API request limits should not fail a task
Date Tue, 17 Dec 2019 17:54:01 GMT


ASF subversion and git services commented on AIRFLOW-5889:

Commit 24ad6c6ef0ddf3ddd4dad507fb19a0758cb50cf6 in airflow's branch refs/heads/v1-10-test
from Darren Weber

[AIRFLOW-5889] Make polling for AWS Batch job status more resilient (#6765)

- errors in polling for job status should not fail
  the airflow task when the polling hits an API throttle
  limit; polling should detect those cases and retry a
  few times to get the job status, only failing the task
  when the job description cannot be retrieved
- added typing for the BatchProtocol method return
  types, based on the botocore.client.Batch types (NOT
- applied trivial format consistency using black, i.e.
  $ black -t py36 -l 96 {files}

(cherry picked from commit 479ee639219b1f3454b98c14811dfcdf7c4b4693)
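The retry behavior described in the commit message can be sketched as below. This is a minimal illustration, not the operator's actual code: `ThrottleError` and `poll_job_status` are hypothetical stand-ins for botocore's `ClientError` handling and the operator's job-description call.

```python
import random
import time

# Hypothetical set of AWS error codes treated as throttling; the real
# code inspects botocore.exceptions.ClientError response error codes.
THROTTLE_ERRORS = {"TooManyRequestsException", "ThrottlingException"}


class ThrottleError(Exception):
    """Stand-in for a botocore ClientError carrying an error code."""

    def __init__(self, code):
        super().__init__(code)
        self.code = code


def poll_job_status(describe_jobs, max_retries=4, base_delay=1.0):
    """Call ``describe_jobs`` until it succeeds or retries are exhausted.

    Throttle errors pause polling and retry with exponential backoff plus
    jitter; only a persistent failure propagates to fail the task.
    """
    for attempt in range(max_retries + 1):
        try:
            return describe_jobs()
        except ThrottleError as err:
            if err.code not in THROTTLE_ERRORS or attempt == max_retries:
                raise
            # Exponential backoff with jitter spreads out retries from
            # many concurrent tasks so they do not re-collide.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

Only a non-throttle error, or a throttle error that persists through all retries, is re-raised to fail the task.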

> AWS Batch Operator - API request limits should not fail a task
> --------------------------------------------------------------
>                 Key: AIRFLOW-5889
>                 URL:
>             Project: Apache Airflow
>          Issue Type: Bug
>          Components: aws, contrib
>    Affects Versions: 1.10.2, 1.10.3, 1.10.4, 1.10.5, 1.10.6
>            Reporter: Darren Weber
>            Assignee: Darren Weber
>            Priority: Major
>              Labels: AWS, aws-batch
>             Fix For: 1.10.7
> The AWS Batch Operator attempts to use a boto3 feature that is not available and has
not been merged in years, see
>  - []
>  - see also []
> This is a curious case of premature optimization. In the meantime, the fallback is the
exponential-backoff routine for the status checks on the batch job. Unfortunately, when the
concurrency of Airflow jobs is very high (hundreds of tasks), this fallback polling hits the
AWS Batch API too hard, the AWS API throttle throws an error, and the Airflow task fails
simply because the status was polled too frequently. Airflow then issues a retry of the task
even though the batch job is actually still running, resulting in duplicate batch jobs. An
exception thrown for an AWS API throttle limit should not fail the task; it should just pause
the polling and retry the job-status poll.
> This is an example of an API throttle exception:
> {code:java}
> An error occurred (TooManyRequestsException) when calling the DescribeJobs operation
> (reached max retries: 4): Too Many Requests
> {code}
> This exception should be handled while waiting for a job to complete; it must not result
in a job retry.
> Reduced polling rates help, but additional exception handling in the polling function is
required. Within the exception-handling code, a random pause in the polling routine could
help to alleviate the API throttle limits. Maybe the class could expose a parameter for the
polling rate (or a callable)?
> Another consideration is possible use of something like the sensor-poke approach, with
rescheduling, so that the polling process does not occupy a worker for the full duration of
a batch job, e.g.
> - []
> If a rescheduling approach is adopted, similar API throttle considerations apply.
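The sensor-poke idea above can be sketched generically. In actual Airflow this corresponds to a sensor run with mode="reschedule", where each poke is a short-lived check and the worker slot is released between pokes; the helpers below (`poke`, `run_rescheduled`, `get_job_status`) are hypothetical illustrations, not Airflow APIs.

```python
def poke(get_job_status):
    """One short status check; return True once the job is done."""
    status = get_job_status()
    if status == "FAILED":
        raise RuntimeError("AWS Batch job failed")
    return status == "SUCCEEDED"


def run_rescheduled(get_job_status, max_pokes=10):
    """Emulate a rescheduled sensor: each poke is a separate, short
    invocation, so no worker is occupied for the job's full duration."""
    for _ in range(max_pokes):
        if poke(get_job_status):
            return True
    # Job still running after max_pokes checks; a real sensor would
    # keep rescheduling until its timeout elapses.
    return False
```

Because each poke would itself call DescribeJobs, the same throttle-tolerant retry logic would still be needed inside the poke.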

This message was sent by Atlassian Jira
