storm-user mailing list archives

From David Shepherd <>
Subject Storm cluster only sees 1 worker node at a time - but switches between nodes
Date Thu, 05 Feb 2015 19:05:09 GMT
I have set up a Storm cluster on 3 vms running on openstack.  Zookeeper is running on separate
VMs. 1 VM has the nimbus process, and the UI process, and no workers, and the other 2 machines
each have a worker with 2 ports.  The workers were created by provisioning a VM from an image
using the openstack tooling, so they are identical.

Everything starts up with no errors in any logs I can find.  However, when I go to the Storm
UI, the cluster summary says there is only 1 supervisor with 2 slots, and the supervisor
summary only has 1 row, but the host name switches between storm-worker-1 and storm-worker-2.
It seems to be seeing both machines, but only 1 at a time.
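For anyone hitting the same symptom, one way to narrow it down is to look at what the supervisors actually register in Zookeeper. This is a hedged diagnostic sketch, not from the original post: it assumes the default storm.zookeeper.root of "/storm" and uses the zookeeper-host-1 address from the storm.yaml below.

```shell
# List the supervisor ids that Storm's supervisor daemons have
# registered as ephemeral znodes in Zookeeper (default root "/storm").
# "zookeeper-host-1" is the first server from the storm.yaml below.
zkCli.sh -server zookeeper-host-1:2181 ls /storm/supervisors

# If only one supervisor id appears even though two supervisor
# daemons are running, both machines are likely registering under the
# same id. The supervisor id is generated once and persisted under
# storm.local.dir, so this can happen when worker VMs are cloned from
# an image that was captured after Storm had already run once --
# clearing /var/opt/stormtmp on one node and restarting its
# supervisor would give it a fresh id.
```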

The storm.yaml is:


storm.zookeeper.servers:
    - "zookeeper-host-1"
    - "zookeeper-host-2"
    - "zookeeper-host-3"

storm.zookeeper.port: 2181

nimbus.host: "nimbus"

nimbus.thrift.port: 6627

storm.local.dir: "/var/opt/stormtmp"

java.library.path: "/usr/local/lib"

supervisor.slots.ports:
     - 6700
     - 6701

worker.childopts: "-Xmx768m"

nimbus.childopts: "-Xmx512m"

supervisor.childopts: "-Xmx256m"

ui.childopts: "-Xmx512m"

I am totally stumped - can't find anything on this behavior anywhere online or in any of the
books I have.  If anyone else has experienced this and can point me in the right direction
it would be much appreciated.


