When this happens to me I have to do a full cluster restart. Even a rolling restart across the cluster doesn't fix it; all of the nodes need to be stopped at the same time. After bringing everything back up, the ring is correct.
Does anyone know how a cluster gets into this state?
At startup, do you see log lines like this?
Gossiper.java (line 576) Node /192.168.34.30 is now part of the cluster
Are all the nodes listed?
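One quick way to check is to grep each node's system log for those gossip membership lines and compare the set of addresses against your seed list. This is only a sketch: the log path (`/var/log/cassandra/system.log`) and exact message format vary by Cassandra version, and the sample lines below are stand-ins modeled on the line above.

```shell
# Stand-in for the real log contents; in practice you would read
# /var/log/cassandra/system.log (path is an assumption, it depends on your install)
log='INFO Gossiper.java (line 576) Node /192.168.34.30 is now part of the cluster
INFO Gossiper.java (line 576) Node /192.168.34.31 is now part of the cluster'

# List the unique addresses this node has learned about via gossip;
# on a healthy node this should include every member of the cluster
echo "$log" | grep -o '/[0-9.]*' | sort -u
```

If a node's list is missing members that the seed nodes know about, that node never gossiped with the rest of the ring.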
On 30 Jun 2010, at 22:50, 王一锋 wrote:
In a Cassandra cluster, when issuing the ring command on each node, some nodes show all the nodes in the cluster, but others show only a subset.
All nodes share the same seed list.
And even some of the nodes in the seed list have this problem.
Restarting the problematic nodes doesn't solve it.
I also tried disabling the firewall with the following command:
service iptables stop
It still doesn't work.
Anyone got a clue?
Thanks very much.