airflow-commits mailing list archives

From "George Leslie-Waksman (JIRA)" <>
Subject [jira] [Commented] (AIRFLOW-513) ExternalTaskSensor tasks should not count towards parallelism limit
Date Mon, 03 Oct 2016 02:56:21 GMT


George Leslie-Waksman commented on AIRFLOW-513:

What's wrong with workaround 2?

The point of parallelism is to control how many processes are running. A sensor that's waiting
is in a while loop eating processing time, so you don't really want to ignore them for parallelism's sake.

A short timeout, with a tuned retry delay and retry count, will work just fine.
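To illustrate the arithmetic behind that suggestion (a sketch with made-up numbers, not figures from this thread): with a sensor's `timeout`, `retries`, and `retry_delay` parameters, each attempt occupies an execution slot for at most `timeout` seconds, while the retries stretch the overall window the sensor keeps checking.

```python
from datetime import timedelta

def total_sensing_window(timeout_s: int, retries: int,
                         retry_delay: timedelta) -> timedelta:
    """Rough upper bound on how long a sensor keeps trying before it
    finally fails: each of the (retries + 1) attempts may run for up
    to `timeout_s` seconds, with `retry_delay` between attempts."""
    attempts = retries + 1
    return timedelta(seconds=attempts * timeout_s) + retries * retry_delay

# e.g. a 5-minute timeout with 11 retries spaced 25 minutes apart
# covers roughly a 5.5-hour window, while only holding an execution
# slot for up to 5 minutes at a time:
window = total_sensing_window(timeout_s=300, retries=11,
                              retry_delay=timedelta(minutes=25))
print(window)  # 5:35:00
```

So a long effective wait does not require a long-running sensor process; between attempts the slot is free for other tasks.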

> ExternalTaskSensor tasks should not count towards parallelism limit
> -------------------------------------------------------------------
>                 Key: AIRFLOW-513
>                 URL:
>             Project: Apache Airflow
>          Issue Type: Improvement
>         Environment: Ubuntu 14.04
> Version 1.7.0
>            Reporter: Kevin Yuen
> Hi, 
> We are using airflow version 1.7.0 and we are using `ExternalTaskSensor` pretty heavily
> to manage dependencies between our DAGs.
> We have recently experienced a case where the external task sensors caused the DAGs
> to go into a limbo state because they took up all the execution slots defined via `AIRFLOW__CORE__PARALLELISM`.

> For example: 
>     Given we have 2 DAGs: 
>     first one with 16 python operator tasks, and the other with 16 sensors. We set `PARALLELISM`
to 16. 
>     If the scheduler choses to schedule all 16 sensors first, the dag runs will never
> There are a couple of workarounds for this:
> # staggering the DAGs so that the first dag with python operator runs first
> # lowering the TaskSensor timeout thresholds and relying on retries
> Both of these options seem less than ideal to us. We wonder if `ExternalTaskSensor`
> should really count towards the `PARALLELISM` limit?
> Cheers, 
> Kevin

This message was sent by Atlassian JIRA
