storm-user mailing list archives

From Stig Rohde Døssing <>
Subject Re: Kill_workers cli not working as expected
Date Tue, 30 Apr 2019 20:57:55 GMT
I believe kill_workers is for cleaning up workers if e.g. you want to shut
down a supervisor node, or if you have an unstable machine you want to take
out of the cluster. The command was introduced because simply killing the
supervisor process would leave the workers alive.

If you want to kill the workers and keep them dead, you should also kill
the supervisor on that machine.
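In practice, that means stopping the supervisor first so it cannot relaunch the slots, then cleaning up the orphaned worker JVMs. A minimal sketch, assuming the supervisor runs as a systemd unit named `storm-supervisor` (the unit name and the Storm binary location will vary by installation):

```shell
# Decommission a node: stop the supervisor BEFORE killing workers, so that
# its Slot state machine cannot move the workers back to RUNNING.
# "storm-supervisor" is an assumed systemd unit name -- adjust to your setup.
sudo systemctl stop storm-supervisor

# Now kill the worker JVMs the supervisor left behind on this node.
# kill_workers must run on the supervisor host itself.
storm kill_workers
```

Note that this only removes the workers on one machine; Nimbus will reschedule the affected topologies' workers onto other supervisors in the cluster, which is usually what you want when draining a node.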

More context at

On Tue, 30 Apr 2019 at 22:28, Mitchell Rathbun (BLOOMBERG/ 731
LEX) <> wrote:

> We currently run both Nimbus and Supervisor on the same cluster. When
> running 'storm kill_workers', I have noticed that all of the workers are
> killed, but then are restarted. In the supervisor log I see the following
> for each topology:
> 2019-04-30 16:21:17,571 INFO Slot [SLOT_19227] STATE KILL_AND_RELAUNCH
> msInState: 5 topo:WingmanTopology998-1-1556594165
> worker:f0de554d-81a1-48ce-82e8-9beef009969b -> WAITING_FOR_WORKER_START
> msInState: 0 topo:WingmanTopology998-1-1556594165
> worker:f0de554d-81a1-48ce-82e8-9beef009969b
> 2019-04-30 16:21:25,574 INFO Slot [SLOT_19227] STATE
> topo:WingmanTopology998-1-1556594165
> worker:f0de554d-81a1-48ce-82e8-9beef009969b -> RUNNING msInState: 0
> topo:WingmanTopology998-1-1556594165
> worker:f0de554d-81a1-48ce-82e8-9beef009969b
> Is this the expected behavior (worker process is bounced, not killed)? I
> thought that kill_workers would essentially run 'storm kill' for each of
> the worker processes.
