airflow-dev mailing list archives

From Emmanuel Brard <>
Subject Re: airflow 1.9 tasks randomly failing | k8 - hive
Date Fri, 24 Aug 2018 13:48:00 GMT

We have a similar setup: Airflow 1.9 on Kubernetes with the Celery executor.

We saw some Airflow tasks being killed inside the container because of
cgroup limits (set by Kubernetes and passed to the Docker daemon), while
the Airflow worker process itself (the airflow celery command) survived.
The killed task ended up as a zombie, which the scheduler identified and
marked as failed. We decreased the number of Celery slots on each worker,
reducing memory pressure, and the failures stopped. It happened mostly
with one DAG that used simple operators but was quite big, with subdags
involved.
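To illustrate the mechanism described above: when the kernel OOM killer
SIGKILLs a task process inside the container and the parent has not yet
reaped it, the process lingers in zombie state, which is what the scheduler
then detects. A minimal, self-contained sketch (Linux only, standard
library; not Airflow's actual code):

```python
import os
import signal
import time

# Fork a child that pretends to be a long-running task.
pid = os.fork()
if pid == 0:
    time.sleep(60)
    os._exit(0)

# Simulate the cgroup OOM killer terminating the task process.
os.kill(pid, signal.SIGKILL)
time.sleep(0.2)

# Read the child's state from /proc: 'Z' means zombie (dead but unreaped).
with open(f"/proc/{pid}/stat") as f:
    state = f.read().rsplit(")", 1)[1].split()[0]
print(state)

# Once the parent wait()s on the child, the zombie entry is cleared.
os.waitpid(pid, 0)
```

Until the parent calls waitpid, the kernel keeps the process table entry
around, which is why the scheduler can observe the task as a zombie.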

Maybe you are facing the same issue?
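One way to check: if the whole container was OOM-killed, the pod's status
reports it; if the kernel killed only a non-PID-1 process inside the
container (our case), the container keeps running and you have to look at
the kernel log (dmesg) on the node instead. A sketch of the first check,
parsing pod status as returned by the Kubernetes API (`kubectl get pod -o
json`); the sample status dict below is hypothetical, shaped like the real
API response:

```python
# Hypothetical pod status, shaped like the Kubernetes API response.
sample_status = {
    "containerStatuses": [
        {
            "name": "worker",
            "lastState": {
                "terminated": {"reason": "OOMKilled", "exitCode": 137}
            },
        }
    ]
}


def oom_killed_containers(status):
    """Return names of containers whose last termination was an OOM kill."""
    hits = []
    for cs in status.get("containerStatuses", []):
        term = cs.get("lastState", {}).get("terminated") or {}
        if term.get("reason") == "OOMKilled":
            hits.append(cs["name"])
    return hits


print(oom_killed_containers(sample_status))
```

If this comes back empty but tasks still die, grep the node's dmesg for
"oom-kill" entries naming the task process.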


On Fri, Aug 24, 2018 at 3:40 PM Dubey, Somesh <>

> We are on Airflow 1.9 on Kubernetes. We have many Hive DAGs which we call
> via JDBC (via a REST endpoint), and tasks fail at random: one day one
> would fail, the next day another.
> The pods are all healthy, there is no node eviction or other issue, and
> retries typically work. Even when a task fails in Airflow, it completes
> successfully on the Hive side.
> We do not get good logs of the failure; typically the task log contains
> partial SQL in sysout with no error message, even with the log level
> increased to 5 on the JDBC connection.
> This only happens in production; we have not been able to reproduce the
> issue in dev.
> The closest we have gotten in dev to replicating similar behavior is to
> kill the PID (manually killing airflow run --raw) on the worker node.
> In that case too, things run fine on the Hive side but the task fails in
> Airflow, with a similar task log containing no error.
> Have you seen this kind of behavior? Any help is much appreciated.
> Thanks so much,
> Somesh


GetYourGuide AG

Stampfenbachstrasse 48  

8006 Zürich


