incubator-cassandra-user mailing list archives

From Michael Theroux <mthero...@yahoo.com>
Subject Re: VPC AWS
Date Thu, 05 Jun 2014 12:44:35 GMT
Hello Alain,

We switched from EC2 to VPC a couple of years ago.  The process for us was long, slow, and
multi-step for our (at the time) 6-node cluster.

In our case, we don't need to consider multi-DC.  However, our infrastructure was rapidly
running out of IP addresses, and we wished to move to VPC for its nearly inexhaustible
supply.  In addition, AWS VPC gives us an additional layer of security for our Cassandra
cluster.

To do this, we set up our VPC with both public and private subnets.  Public subnets were
accessible from the Internet (when instances were assigned a public IP), while private subnets
were not (although instances on a private subnet could still reach the Internet via a NAT
instance).  We wished for Cassandra to be on the private subnet.  However, this introduced a
complication: EC2 instances would not be able to communicate directly with our VPC instances
on a private subnet.
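Roughly, the layout looked like the following sketch (the CIDR ranges and labels are invented
for illustration only; they are not our actual addressing):

    # Illustrative VPC layout; all ranges are hypothetical.
    vpc: 10.0.0.0/16
    public_subnet: 10.0.0.0/24    # default route -> Internet gateway; instances may get public IPs
    private_subnet: 10.0.1.0/24   # default route -> NAT instance; no inbound from the Internet
    cassandra: private_subnet     # where we wanted the cluster to end up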

So, to achieve this while keeping Cassandra running without downtime, we essentially had to
stage Cassandra instances on our public subnet, assigning IPs and reconfiguring nodes until
we had a mixed EC2/VPC public-subnet cluster, and then start moving systems to the private
subnet, continuing until all instances were on a private subnet.  Throughout the process we
carefully orchestrated configuration such as broadcast addresses and seeds to make sure the
cluster continued to function properly and all nodes could communicate with each other (a
sketch of the per-node settings follows below).  We also had to carefully orchestrate the
assignment of AWS security groups so that every node could keep talking to every other node
during the transition.
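For illustration, a VPC node in the mixed phase had cassandra.yaml settings along these lines
(the addresses are made up for the example, and only the relevant keys are shown):

    listen_address: 10.0.1.15        # this node's private VPC IP
    broadcast_address: 54.198.22.7   # its public/Elastic IP, reachable from the EC2 nodes
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              # public IPs of seed nodes on both sides of the migration
              - seeds: "54.198.22.7,184.73.10.22"

Gossip ran over the public addresses until every node was inside the VPC; only then could we
drop back to purely private addressing.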

Also keep in mind that using public IPs for communication will add to your AWS costs.
 During our transition we had to do this for a short time, while EC2 instances were
communicating with VPC instances, but we were able to switch to 100% internal IPs once the
migration was complete (you will still incur inter-availability-zone charges regardless).

This process was complex enough that I wrote out a detailed series of steps for each node in
our cluster.

-Mike
 

________________________________
 From: Alain RODRIGUEZ <arodrime@gmail.com>
To: user@cassandra.apache.org 
Sent: Thursday, June 5, 2014 8:12 AM
Subject: VPC AWS
 


Hi guys,

We are going to move from a cluster made of simple Amazon EC2 servers to a VPC cluster. We
are using Cassandra 1.2.11 and I have some questions regarding this switch and the Cassandra
configuration inside a VPC.

So far I have found no documentation on this topic, but I am quite sure that some people are
already using VPC. If you can point me to any documentation regarding VPC / Cassandra, it
would be very nice of you. We have only one DC for now, but we need to remain multi-DC
compatible, since we will be adding a DC very soon.

Also, I would like to know whether I should keep using EC2MultiRegionSnitch or change the
snitch to something else.

What about the broadcast/listen IPs, seeds...?

We currently use public IPs for the broadcast address and for the seeds, and private ones for
the listen address. Machines inside the VPC will only have private IPs, AFAIK. Should I keep
using a broadcast address?
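In cassandra.yaml terms, each of our nodes currently looks something like this (the addresses
are invented for the example):

    endpoint_snitch: Ec2MultiRegionSnitch
    listen_address: 10.32.4.5        # private IP of the node
    broadcast_address: 54.72.11.3    # public IP of the node
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "54.72.11.3,54.72.11.4"   # public IPs of our seed nodes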

Is there any other impact when switching to a VPC?

Sorry if this topic has already been discussed; I was unable to find any useful information...