airflow-commits mailing list archives

From "ASF GitHub Bot (Jira)" <>
Subject [jira] [Commented] (AIRFLOW-4965) Handling throttling in GCP AI operators
Date Fri, 11 Oct 2019 07:25:00 GMT


ASF GitHub Bot commented on AIRFLOW-4965:

mik-laj commented on pull request #6305: [AIRFLOW-4965] Handle quota exceptions in GCP AI
   Make sure you have checked _all_ steps below.
   ### Jira
   - [ ] My PR addresses the following [Airflow Jira](
issues and references them in the PR title. For example, "\[AIRFLOW-XXX\] My Airflow PR"
     - In case you are fixing a typo in the documentation you can prepend your commit with
\[AIRFLOW-XXX\]; code changes always need a Jira issue.
     - In case you are proposing a fundamental code change, you need to create an Airflow
Improvement Proposal ([AIP](
     - In case you are adding a dependency, check if the license complies with the [ASF 3rd
Party License Policy](
   ### Description
   - [ ] Here are some details about my PR, including screenshots of any UI changes:
   ### Tests
   - [ ] My PR adds the following unit tests __OR__ does not need testing for this extremely
good reason:
   ### Commits
   - [ ] My commits all reference Jira issues in their subject lines, and I have squashed
multiple commits if they address the same issue. In addition, my commits follow the guidelines
from "[How to write a good git commit message](":
     1. Subject is separated from body by a blank line
     1. Subject is limited to 50 characters (not including Jira issue reference)
     1. Subject does not end with a period
     1. Subject uses the imperative mood ("add", not "adding")
     1. Body wraps at 72 characters
     1. Body explains "what" and "why", not "how"
   ### Documentation
   - [ ] In case of new functionality, my PR adds documentation that describes how to use it
     - All the public functions and classes in the PR contain docstrings that explain
what they do
     - If you implement backwards-incompatible changes, please leave a note in the [](
so we can assign it to an appropriate release
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:

> Handling throttling in GCP AI operators
> ---------------------------------------
>                 Key: AIRFLOW-4965
>                 URL:
>             Project: Apache Airflow
>          Issue Type: Improvement
>          Components: gcp
>    Affects Versions: 1.10.3
>            Reporter: Kamil
>            Assignee: Kamil
>            Priority: Minor
> Polidea develops Apache Airflow operators for the following Google Cloud AI services:
>  * Cloud Translate
>  * Cloud Vision
>  * Cloud Text-To-Speech
>  * Cloud Speech-To-Text
>  * Cloud Translate Speech
>  * Cloud Natural Language
>  * Cloud Video Intelligence
> Those APIs implement quota verification and throttle requests that exceed the quota.
> There are several types of quotas and limits:
> *Translate:*
>  * characters per day [403 - error  “Daily Limit Exceeded”]
>  * characters per 100 seconds (per project or per project/user) [403 error “User Rate
Limit Exceeded”] [TEMPORARY]
> *Vision:*
>  * image file size
>  * requests per minute [TEMPORARY]
>  * images per feature per month 
> *Text to speech:*
>  * total characters per request
>  * requests per minute [TEMPORARY]
>  * characters per minute [TEMPORARY]
> *Speech to text:*
>  * limits of the content size
>  * limits of the phrases/characters per request for context
>  * requests per 60 seconds [TEMPORARY]
>  * processing per day
> *Natural Language:*
>  * Text Content size 
>  * Token quota and Entity mentions (ignored?)
>  * requests per 100 seconds [TEMPORARY]
>  * requests per day 
> *Video Intelligence:*
>  * video size
>  * requests per minute [TEMPORARY]
>  * backend time in seconds per minute [TEMPORARY]
> In all Cloud AI operators we use the Python client libraries. Most methods use the built-in
Retry mechanism. The assumption is that, for functions that use this mechanism, it is implemented
correctly and that by default only "retriable" errors are retried. Users can configure the
behaviour of the Retry object: exponential back-off factor, delays, etc. In the current API,
the Retry object can be provided by the user creating the DAG and using the operator.
> The APIs that use the Retry object are:
>  * *Cloud Vision Product Search*
>  * *Cloud Vision Extra*
>  * *Cloud Vision Detect*
>  * *Cloud Natural Language*
>  * *Cloud Speech*
>  * *Cloud Video Intelligence*
> The Retry mechanism provided by the client API should be enough to handle temporary bursts
of requests. Users can control the exponential back-off rate and adjust it to their own needs.
They can also manually restart failed jobs using standard Airflow mechanisms in case their
configuration is not well adjusted to their limits.
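As a rough, stdlib-only sketch of the kind of configurable exponential back-off described above (the names `TransientQuotaError`, `max_attempts`, and `base_delay` are illustrative and not part of the google-cloud-python API):

```python
import functools
import random
import time


class TransientQuotaError(Exception):
    """Hypothetical stand-in for a temporary, retriable quota error."""


def with_backoff(max_attempts=5, base_delay=1.0, max_delay=100.0):
    """Retry the wrapped call on TransientQuotaError with exponential back-off."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return func(*args, **kwargs)
                except TransientQuotaError:
                    # Re-raise once the retry budget is exhausted.
                    if attempt == max_attempts - 1:
                        raise
                    # Exponential back-off with jitter, capped at max_delay.
                    delay = min(base_delay * (2 ** attempt), max_delay)
                    time.sleep(delay * random.uniform(0.5, 1.0))
        return wrapper
    return decorator
```

In a real DAG the user would instead pass a `google.api_core.retry.Retry` instance through the operator's `retry` argument; the sketch only shows the shape of the back-off policy being tuned.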
> The only case where Retry is not used in the API is the Translate operator.
> In the case of the Translate API, the proposal is to use a retry decorator in our own hook and
perform retries only on the *"User Rate Limit Exceeded"* error; all other errors (the size
limit and "Daily Limit Exceeded") should be treated as non-retriable. In those cases users will
be able to manually restart failed jobs.
> h1. Implementation
> We analyzed two solutions:
>  # extension of the built-in Retry mechanism from the google-cloud-python library
>  # an external library, tenacity
> The use of the first solution seems natural, but it is problematic. Each method creates
a Retry object by default from a configuration based on a private configuration file. If we
wanted to extend this mechanism, we would have to copy the logic of this configuration.
The google-cloud-python library does not allow us to easily change only part of the configuration
of a Retry object.
> The retry mechanism is not supported by all services (see the current approach described above),
so there is a need to create a separate mechanism. A new mechanism based on an external
library will work with all services. This will provide a more predictable developer experience.
> The tenacity library provides a retry mechanism based on a decorator. It uses a wait
strategy that applies exponential back-off. All hook methods that are covered by quota restrictions
will get the new decorator.
> Sample implementation:
> {{@tenacity.retry(}}
>  {{    wait=tenacity.wait_exponential(min=1, max=100),}}
>  {{    retry=retry_if_temporary_quota(),}}
>  {{)}}
>  {{def fetch():}}
>  {{    response = client.translate(TEXT, target_language="PL")['translatedText']}}
>  {{    return response}}
> _retry_if_temporary_quota_ is a factory method that creates a predicate checking whether the
exception concerns a quota restriction.
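A minimal sketch of such a predicate factory, assuming the quota error surfaces as an HTTP 403 whose message distinguishes the temporary "User Rate Limit Exceeded" case. The `QuotaError` class here is illustrative; the real hook would inspect the google-cloud client's own exception types:

```python
class QuotaError(Exception):
    """Hypothetical 403 quota error carrying the API's error message."""

    def __init__(self, message, code=403):
        super().__init__(message)
        self.code = code
        self.message = message


def retry_if_temporary_quota():
    """Factory returning a predicate over exceptions.

    Only the temporary per-100-seconds rate-limit error is retriable;
    size limits and "Daily Limit Exceeded" are permanent and should
    fail fast so the user can restart the job manually.
    """
    def is_temporary_quota_error(exc):
        return (
            isinstance(exc, QuotaError)
            and exc.code == 403
            and "User Rate Limit Exceeded" in exc.message
        )
    return is_temporary_quota_error
```

To plug a bare predicate like this into the decorator shown in the sample, it would be wrapped as `retry=tenacity.retry_if_exception(is_temporary_quota_error)`, since tenacity's `retry=` argument expects a retry-condition object rather than a plain function.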

This message was sent by Atlassian Jira
