flink-user mailing list archives

From "Kaepke, Marc" <marc.kae...@haw-hamburg.de>
Subject Re: Standalone cluster - taskmanager settings ignored
Date Fri, 11 Aug 2017 15:34:43 GMT
I start my cluster with:
bigdata@master:/usr/lib/flink-1.3.2$ ./bin/start-cluster.sh
Starting cluster.
Starting jobmanager daemon on host master.
Starting taskmanager daemon on host master.
Starting taskmanager daemon on host slave1.
Starting taskmanager daemon on host slave3.


And when I stop it:
bigdata@master:/usr/lib/flink-1.3.2$ ./bin/stop-cluster.sh
Stopping taskmanager daemon (pid: 27050) on host master.
Stopping taskmanager daemon (pid: 2091) on host slave1.
Stopping taskmanager daemon (pid: 12684) on host slave3.
Stopping jobmanager daemon (pid: 26636) on host master.


My previous cluster additionally included slave5.

My current cluster does not include slave5, but the WebUI shows 4 TMs: master, slave1, slave3, and slave5.
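
A taskmanager daemon from the previous configuration could still be running on slave5: stop-cluster.sh only contacts the hosts listed in the current conf/slaves, so a host removed from that file is never asked to shut down. A quick way to check and stop it by hand (a sketch, assuming SSH access to slave5 and the same Flink install path there):

bigdata@master:/usr/lib/flink-1.3.2$ ssh slave5 jps        # look for a lingering TaskManager process
bigdata@master:/usr/lib/flink-1.3.2$ ssh slave5 /usr/lib/flink-1.3.2/bin/taskmanager.sh stop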

On 11.08.2017 at 17:25, Kaepke, Marc <marc.kaepke@haw-hamburg.de> wrote:

Hi,

I have a cluster of 4 dedicated machines (no VMs). My previous config was: 1 master and 3
slaves. Each machine runs either a taskmanager or the jobmanager.

Now I want to reduce my cluster: still 1 master and 3 slaves, but one machine runs the
jobmanager and a taskmanager in parallel. I changed all conf/slaves files. When I start
the cluster, everything looks fine for about two seconds: one JM and 3 TMs with 8 cores/slots
each. Two seconds later I see 4 taskmanagers and one JM. I can also run a job with 32 slots
(4 TM * 8 slots) without any errors.
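
For reference, a minimal sketch of what I intend the configuration to be (hostnames as in my cluster; the keys are the standard Flink 1.3 ones):

conf/slaves (identical on every machine):
master
slave1
slave3

conf/flink-conf.yaml (relevant entries):
jobmanager.rpc.address: master
taskmanager.numberOfTaskSlots: 8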

Why does my cluster have 4 taskmanagers?! All slaves files are clean and contain 3 entries.


Thanks!

Marc

