airflow-dev mailing list archives

From Greg Neiheisel <g...@astronomer.io>
Subject Re: How to stop an airflow worker from starting new jobs
Date Tue, 14 May 2019 13:07:49 GMT
Hey Sachin, Celery should take care of this kind of "warm shutdown" for you:
https://docs.celeryproject.org/en/latest/userguide/workers.html#process-signals
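(Just as a quick illustration, and assuming the pod's main process is the
celery worker itself, you can trigger the same warm shutdown by hand by
sending TERM to that process:

    # warm shutdown: stop consuming new tasks, exit once running tasks finish
    kill -TERM <celery-worker-pid>

where <celery-worker-pid> is just a placeholder for the worker's main
process id.)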

If you bump the image tag on the worker Kubernetes Deployment to upgrade
something, kube/docker will send a SIGTERM to the worker pods, telling
Celery to stop taking new tasks and finish out the tasks it's already
running. You can set terminationGracePeriodSeconds on the Deployment to tell
kube to wait x seconds before finally sending a SIGKILL to forcibly kill off
a pod. That behavior is documented here:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#podspec-v1-core
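As a rough sketch of what that looks like on the Deployment (the name,
image, labels, and numbers here are all placeholders for whatever you
actually run):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: airflow-worker                      # placeholder name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: airflow-worker
      template:
        metadata:
          labels:
            app: airflow-worker
        spec:
          # after SIGTERM, give running tasks up to 10 minutes before SIGKILL
          terminationGracePeriodSeconds: 600
          containers:
            - name: worker
              image: your-registry/airflow:1.10.3   # bumping this tag rolls the pods
              args: ["worker"]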

If you aren't upgrading your image tag or anything, you could probably
scale the workers to 0 replicas, then back up to get the same behavior
without changing anything on the Deployment.
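Roughly like this (deployment name and replica count are just examples):

    kubectl scale deployment airflow-worker --replicas=0   # pods get SIGTERM, finish in-flight tasks, then go away
    kubectl scale deployment airflow-worker --replicas=3   # bring the workers back afterwards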

On Tue, May 14, 2019 at 3:24 AM Sachin <parmar.sachin@gmail.com> wrote:

> Hi,
>
> I have an Airflow setup with Celery executors on Kubernetes. The cluster has
> many worker pods picking jobs from different queues (RabbitMQ).
>
> I want some of my Airflow worker pods to continue with the jobs they are
> currently running and to stop fetching/starting any new jobs from the queue.
> On completion of the running job, I will restart the worker pod.
>
> Rest of the airflow worker pods in the cluster will continue with running
> jobs and starting new jobs.
>
> Is there a way to achieve this?
>
> Thanks,
> Sachin
>


-- 
Greg Neiheisel / CTO Astronomer.io
