incubator-cassandra-user mailing list archives

From Edward Capriolo <>
Subject Re: Nodetool doesn't show two nodes
Date Mon, 18 Feb 2013 14:44:04 GMT
These issues are more cloud-specific than they are Cassandra-specific.
Cloud executives tell me in white papers that the cloud is awesome and
you can fire all your sysadmins and network people and save money.

This is what happens when you believe cloud executives and their white
papers: you spend 10+ hours troubleshooting cloud networking problems.

On Mon, Feb 18, 2013 at 9:12 AM, Boris Solovyov
<> wrote:
> I think it is actually more of a problem that there were no error
> messages or other indication of what went wrong when the nodes couldn't
> contact each other. Should I file an issue report on this? Clearly
> Cassandra must have tried to contact some IP on port 7000 and failed.
> Why didn't it log that? It would have saved me about 10 hours :-P
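> In the meantime, a quick way to check storage-port reachability by hand
> (a sketch, assuming netcat is installed; substitute the other node's
> public IP) is:
>
>     # verify TCP connectivity to the peer's storage_port (7000)
>     nc -vz <other-node-public-ip> 7000
>
> If that fails, the problem is the network path, not Cassandra.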
> On Sun, Feb 17, 2013 at 11:54 PM, Jared Biel <>
> wrote:
>> This is something that I found while using the multi-region snitch:
>> it uses public IPs for communication. See the original ticket here:
>> It'd be nice if it used the private IPs to communicate with nodes in
>> the same region as itself, but I do not believe that is the case. Be
>> aware that you will be charged for external data transfer even between
>> nodes in the same region, because the traffic will not fall under
>> Amazon's free (same-AZ) or reduced (inter-AZ) tiers.
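>> For reference, the relevant cassandra.yaml settings look roughly like
>> this (a sketch with placeholder addresses; Ec2MultiRegionSnitch reads
>> the instance's public IP from EC2 instance metadata):
>>
>>     # cassandra.yaml (per node) - illustrative values
>>     endpoint_snitch: Ec2MultiRegionSnitch
>>     storage_port: 7000               # inter-node port that must be reachable
>>     listen_address: 10.0.0.11        # this node's private IP
>>     broadcast_address: 203.0.113.11  # this node's public IP
>>     seed_provider:
>>         - class_name: org.apache.cassandra.locator.SimpleSeedProvider
>>           parameters:
>>               - seeds: "203.0.113.11,198.51.100.22"  # seeds' public IPs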
>> If you continue using this snitch in the meantime, it is not necessary
>> (or recommended) to have those ports open to the entire internet.
>> You'll simply need to add the public IPs of your C* servers to the
>> correct security group(s) to allow access.
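>> For example, with the AWS CLI (a sketch; the group ID and address are
>> placeholders for your own values):
>>
>>     # allow inter-node traffic on the storage port from one C* node's
>>     # public IP only, instead of 0.0.0.0/0
>>     aws ec2 authorize-security-group-ingress \
>>         --group-id sg-0123456789abcdef0 \
>>         --protocol tcp --port 7000 \
>>         --cidr 203.0.113.11/32
>>
>> Repeat per node, or script it over the cluster's public IPs.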
>> There's something else that's a little strange about the EC2 snitches:
>> "us-east-1" is (incorrectly) represented as the datacenter "us-east",
>> while other regions are recognized and named properly ("us-west-2",
>> for example). This is kind of covered in the ticket here:
>> I wish it could be fixed properly.
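>> This matters when you define keyspaces: with NetworkTopologyStrategy,
>> the datacenter names must match what the snitch reports. A sketch
>> (keyspace name and replication counts are illustrative):
>>
>>     -- note 'us-east', not 'us-east-1', for the us-east-1 region
>>     CREATE KEYSPACE myapp WITH replication = {
>>         'class': 'NetworkTopologyStrategy',
>>         'us-east': 2,
>>         'us-west-2': 2
>>     };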
>> Good luck!
>> On 17 February 2013 16:16, Boris Solovyov <>
>> wrote:
>> > OK. I got it. I realized that storage_port wasn't actually open
>> > between the nodes, because Cassandra is using the public IP. (I did
>> > find this information in the docs, after looking more... it is in the
>> > section on "Types of snitches." It explains everything I found by
>> > trial and error.)
>> >
>> > After opening port 7000 to all IP addresses, the cluster boots OK and
>> > the two nodes see each other. Now I have the happy result. But my
>> > nodes are wide open to the entire internet on port 7000. This is a
>> > serious problem. It obviously can't be put into production.
>> >
>> > I definitely need cross-continent deployment. A single-AZ or
>> > single-region deployment is not going to be enough. How do people
>> > solve this in practice?
