cassandra-user mailing list archives

From "Ackerman, Mitchell" <>
Subject RE: VPC AWS
Date Wed, 11 Jun 2014 18:11:44 GMT
FYI, we have established an OpenVPN/NAT mesh between our regions with good success.

From: Peter Sanford []
Sent: Wednesday, June 11, 2014 8:57 AM
Subject: Re: VPC AWS

Tinc's developers acknowledge that there are some fairly serious unfixed security issues in
their protocol. As such, I do not consider tinc to be a good choice for production systems.

Either IPSec or OpenVPN are reasonable for connecting VPCs in different regions, and Amazon
has published guides for both methods[1][2]. We use IPSec because we have a lot of experience
with it, but I'm hesitant to recommend it because it is easy to configure in an insecure manner.


On Tue, Jun 10, 2014 at 6:29 PM, Ben Bromhead <<>>
Have a look at tinc: it is mesh-based and handles multiple gateways for the
same network in a graceful manner (so you can run two gateways per region for HA).

Also supports NAT traversal if you need to do public-private clusters.

We are currently evaluating it for our managed Cassandra in a VPC solution, but we haven’t
ever used it in a production environment or with a heavy load, so caveat emptor.

As for the snitch… the GossipingPropertyFileSnitch (GPFS) is definitely the most flexible.

Ben Bromhead
Instaclustr | @instaclustr | +61 415 936 359

On 10 Jun 2014, at 1:42 am, Ackerman, Mitchell <<>>


I too am working on setting up a multi-region VPC Cassandra cluster. Each region is connected
to the others via an OpenVPN tunnel, so we can use internal IP addresses for both the seeds
and the broadcast address. This allows us to use the EC2Snitch (my interpretation of the caveat
that this snitch won't work in a multi-region environment is that it won't work if you
can't use internal IP addresses, which we can via the VPN tunnels). All the C* nodes find
each other, and nodetool (or OpsCenter) shows that we have established a multi-datacenter cluster.
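As a sketch, the per-node settings this describes might look something like the following cassandra.yaml fragment (the cluster name and IP addresses are placeholders, private addresses assumed reachable across the inter-region VPN tunnels):

```yaml
# cassandra.yaml (per node) -- a hypothetical sketch of the OpenVPN + EC2Snitch
# setup described above; all addresses are illustrative.
cluster_name: 'MyCluster'
endpoint_snitch: Ec2Snitch

# Private IP of this node; reachable from the other region via the tunnel.
listen_address: 10.0.1.15
# With a VPN mesh there is no need to advertise a public IP, so
# broadcast_address can stay unset (it defaults to listen_address).

seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # Seeds listed by private IP, one or two per region.
          - seeds: "10.0.1.10,10.1.1.10"
```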

Thus far, I'm not happy with the performance of the cluster in such a configuration, but
I don't think the problem is related to this configuration itself, though it could be.


From: Peter Sanford []
Sent: Monday, June 09, 2014 7:19 AM
Subject: Re: VPC AWS

Your general assessments of the limitations of the Ec2 snitches seem to match what we've found.
We're currently using the GossipingPropertyFileSnitch in our VPCs. This is also the snitch
to use if you ever want to have a DC in EC2 and a DC with another hosting provider.
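For reference, with the GossipingPropertyFileSnitch each node declares only its own datacenter and rack in cassandra-rackdc.properties, and gossip propagates that to the rest of the ring; a minimal sketch (the DC and rack names are illustrative):

```properties
# conf/cassandra-rackdc.properties -- read by GossipingPropertyFileSnitch.
# Each node states its own location; gossip spreads it to the cluster.
dc=us-east-vpc
rack=rack1
```

This pairs with `endpoint_snitch: GossipingPropertyFileSnitch` in cassandra.yaml, and because the DC names are arbitrary strings, it works equally well when one DC lives outside EC2.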


On Mon, Jun 9, 2014 at 5:48 AM, Alain RODRIGUEZ <<>>
Hi guys, there have been a lot of answers; it looks like this subject interests a lot of people,
so I will end up letting you know how it went for us.

For now, we are still doing some tests.

Yet I would like to know how we are supposed to configure Cassandra in this environment:

- Multiple datacenters (should these be VPCs, one per region, linked through a VPN?)
- Cassandra 1.2

We are currently running with the EC2MultiRegionSnitch, but with no VPC. Our VPC will have no
public interface, so I am not sure how to configure the broadcast address or the seeds, which are
supposed to be the public IP of the node.

I could use the EC2Snitch, but will cross-region communication work properly?

Should I use another snitch?

Is someone using a similar configuration ?

Thanks for the information already given, guys; we will achieve this ;-).

2014-06-07 0:05 GMT+02:00 Jonathan Haddad <<>>:

This may not help you with the migration, but it may with maintenance & management.  I
just put up a blog post on managing VPC security groups with a tool I open sourced at my previous
company.  If you're going to have different VPCs (staging / prod), it might help with managing
security groups.

Semi-shameless plug... but relevant.

On Thu, Jun 5, 2014 at 12:01 PM, Aiman Parvaiz <<>>
Cool, thanks again for this.

On Thu, Jun 5, 2014 at 11:51 AM, Michael Theroux <<>>
You can have a ring spread across EC2 and the public subnet of a VPC.  That is how we did
our migration.  In our case, we simply replaced the existing EC2 node with a new instance
in the public VPC, restored from a backup taken right before the switch.


From: Aiman Parvaiz <<>>
To: Michael Theroux <<>>
Cc: "<>" <<>>
Sent: Thursday, June 5, 2014 2:39 PM
Subject: Re: VPC AWS

Thanks for this info Michael. As far as restoring node in public VPC is concerned I was thinking
( and I might be wrong here) if we can have a ring spread across EC2 and public subnet of
a VPC, this way I can simply decommission nodes in Ec2 as I gradually introduce new nodes
in public subnet of VPC and I will end up with a ring in public subnet and then migrate them
from public to private in a similar way may be.

If anyone has any experience or suggestions with this, please share; I would really appreciate it.

On Thu, Jun 5, 2014 at 10:37 AM, Michael Theroux <<>>
The implementation of moving from EC2 to a VPC was a bit of a juggling act.  Our motivation
was two fold:

1) We were running out of static IP addresses, and it was becoming increasingly difficult
in EC2 to design around limiting the number of static IP addresses to the number of public
IP addresses EC2 allowed
2) VPC affords us an additional level of security that was desirable.

However, we needed to consider the following limitations:

1) By default, you have a limited number of available public IPs for both EC2 and VPC.
2) AWS security groups need to be configured to allow traffic for Cassandra to/from instances
in EC2 and the VPC.
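As a concrete illustration of point 2, here is a hypothetical CloudFormation-style sketch of the ingress rules inter-node Cassandra traffic needs (the ports are the Cassandra 1.2-era defaults; the CIDR range is a placeholder for your EC2 + VPC address space):

```yaml
# Hypothetical CloudFormation fragment: a security group allowing
# Cassandra traffic between EC2-classic and VPC instances.
CassandraSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow Cassandra inter-node and client traffic
    SecurityGroupIngress:
      - { IpProtocol: tcp, FromPort: 7000, ToPort: 7001, CidrIp: 10.0.0.0/8 }  # storage / SSL storage
      - { IpProtocol: tcp, FromPort: 9160, ToPort: 9160, CidrIp: 10.0.0.0/8 }  # Thrift clients (1.2 era)
      - { IpProtocol: tcp, FromPort: 9042, ToPort: 9042, CidrIp: 10.0.0.0/8 }  # native protocol clients
      - { IpProtocol: tcp, FromPort: 7199, ToPort: 7199, CidrIp: 10.0.0.0/8 }  # JMX / nodetool
```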

You are correct at the high level that the migration goes from EC2->Public VPC (VPC with
an Internet Gateway)->Private VPC (VPC with a NAT).  The first phase was moving instances
to the public VPC, setting broadcast and seeds to the public IPs we had available.  Basically:

1) Take down a node, taking a snapshot for a backup
2) Restore the node on the public VPC, assigning it to the correct security group, manually
setting the seeds to other available nodes
3) Verify the cluster can communicate
4) Repeat
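For step 2, the restored node keeps its old token and data, so it must not bootstrap again; a sketch of the cassandra.yaml deltas on the new instance (every value here is a placeholder, not taken from the thread):

```yaml
# cassandra.yaml deltas on a node restored into the public VPC -- a sketch.
listen_address: 10.0.2.21        # new private IP in the public VPC subnet
broadcast_address: 54.0.0.21     # public EIP, so nodes still in EC2 can reach it
auto_bootstrap: false            # restoring from a snapshot, not streaming data
initial_token: 56713727820156410577229101238628035242   # same token as the old node
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "54.0.0.10,10.0.2.10"   # nodes already up and reachable
```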

Realize that the NAT instance on the private subnet will also require a public IP. What got really
interesting is that near the end of the process we ran out of available IPs, requiring us
to switch the final node that was on EC2 directly to the private VPC (and taking down two
nodes at once, which our setup allowed given we had 6 nodes with an RF of 3).

What we did, and highly suggest for the switch, is to write down every step that has to happen
on every node during the switch.  In our case, many of the moved nodes required slightly different
configurations for items like the seeds.

It's been a couple of years, so my memory on this may be a little fuzzy :)


From: Aiman Parvaiz <<>>
To:<>; Michael Theroux <<>>
Sent: Thursday, June 5, 2014 12:55 PM
Subject: Re: VPC AWS

Thanks for the response. I am about to head into something very similar, if not exactly the same.
I envision things happening along the same lines as you mentioned.
I would be grateful if you could please shed some more light on how you went about switching
Cassandra nodes from the public subnet to the private one without any downtime.
I have not started on this project yet; I'm still in my research phase. I plan to have an EC2 + public
VPC cluster and then decommission the EC2 nodes to have everything in the public subnet; the next step
would be to move it to the private subnet.


On Thu, Jun 5, 2014 at 8:14 AM, Michael Theroux <<>>
We personally use the EC2Snitch; however, we don't have the multi-region requirements you do.

From: Alain RODRIGUEZ <<>>
Sent: Thursday, June 5, 2014 9:14 AM
Subject: Re: VPC AWS

I think you can define a VPC subnet to be public (to have public + private IPs) or private only.

Any insight regarding snitches? What snitch do you guys use?

2014-06-05 15:06 GMT+02:00 William Oberman <<>>:
I don't think traffic will flow between "classic" EC2 and a VPC directly. There is some kind
of gateway bridge instance that sits between them, acting as a NAT. I would think that would
cause new challenges for:

Sorry this response isn't heavy on content!  I'm curious how this thread goes...


On Thursday, June 5, 2014, Alain RODRIGUEZ <<>>
Hi guys,

We are going to move from a cluster made of simple Amazon EC2 servers to a VPC cluster. We
are using Cassandra 1.2.11 and I have some questions regarding this switch and the Cassandra
configuration inside a VPC.

Actually I found no documentation on this topic, but I am quite sure that some people are
already using VPC. If you can point me to any documentation regarding VPC / Cassandra, it
would be very nice of you. We have only one DC for now, but we need to remain multi-DC compatible,
since we will add DCs very soon.

Also, I would like to know if I should keep using the EC2MultiRegionSnitch or change the snitch
to something else.

What about the broadcast/listen IPs, the seeds...?

We currently use public IPs for the broadcast address and for the seeds. We use private ones for
the listen address. Machines inside the VPC will only have private IPs, AFAIK. Should I keep using
a broadcast address?

Are there any other impacts when switching to a VPC?

Sorry if the topic was already discussed, I was unable to find any useful information...

Will Oberman
Civic Science, Inc.
6101 Penn Avenue, Fifth Floor
Pittsburgh, PA 15206
(M) 412-480-7835

Jon Haddad
skype: rustyrazorblade
