airflow-dev mailing list archives

From Greg Neiheisel <>
Subject Re: How to stop an Airflow worker from starting new jobs
Date Tue, 14 May 2019 13:07:49 GMT
Hey Sachin, Celery should take care of this kind of "warm shutdown" for you.
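
(For context, that warm shutdown is Celery's standard reaction to SIGTERM: the
worker stops consuming from the broker and lets in-flight tasks run to
completion. You can trigger it by hand too, e.g. something like the following,
where the pod name is hypothetical and we assume the Celery worker process runs
as PID 1 in the container:)

    # sends SIGTERM to the worker: stop fetching new tasks, finish running ones
    kubectl exec airflow-worker-abc123 -- kill -TERM 1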

If you bump the image tag on the worker Kubernetes Deployment to upgrade
something, kube/docker will send a SIGTERM to the worker pods, telling
Celery to stop taking tasks and finish out its running tasks. You can set
terminationGracePeriodSeconds on the Deployment to tell kube to wait x
seconds before finally sending a SIGKILL to forcibly kill off a pod. That
behavior is documented here -
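
If it helps, a rough sketch of those two knobs via kubectl (the deployment
name airflow-worker, container name worker, image tag, and the 300-second
grace period are all just assumptions for illustration):

    # bump the image tag - kube sends SIGTERM to old pods as it rolls them
    kubectl set image deployment/airflow-worker worker=my-airflow-image:1.10.3

    # give Celery up to 5 minutes to drain before the SIGKILL follows
    kubectl patch deployment airflow-worker \
      -p '{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":300}}}}'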

If you aren't upgrading your image tag or anything, you could probably
scale the workers down to 0 replicas and then back up to get the same
behavior without changing anything on the Deployment.
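
Roughly like so (again, the deployment name and the original replica count
are assumptions):

    # each pod gets a SIGTERM and drains within the grace period
    kubectl scale deployment airflow-worker --replicas=0
    # once drained, bring the workers back
    kubectl scale deployment airflow-worker --replicas=3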

On Tue, May 14, 2019 at 3:24 AM Sachin <> wrote:

> Hi,
> I have an Airflow setup with Celery executors on Kubernetes. The cluster has
> many worker pods picking jobs from different queues (RabbitMQ).
> I want some of my Airflow worker pods to continue with the jobs they are
> currently running and stop fetching/starting any new jobs from the queue.
> On completion of the running jobs, I will restart those worker pods.
> The rest of the Airflow worker pods in the cluster will continue with running
> jobs and starting new jobs.
> Is there a way to achieve this?
> Thanks,
> Sachin

*Greg Neiheisel* / CTO
