incubator-cassandra-user mailing list archives

From Ben Bromhead <...@instaclustr.com>
Subject Re: VPC AWS
Date Wed, 11 Jun 2014 01:29:05 GMT
Have a look at http://www.tinc-vpn.org/; it is mesh-based and handles multiple gateways for the same
network gracefully (so you can run two gateways per region for HA).

Also supports NAT traversal if you need to do public-private clusters. 
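For anyone who hasn't used tinc, a gateway node needs only a small config plus one host file per peer. A minimal sketch (the net name "cassvpn", node names, and addresses below are all hypothetical placeholders, not our actual setup):

```
# /etc/tinc/cassvpn/tinc.conf on a gateway in us-east
Name = useast_gw1
ConnectTo = uswest_gw1        # mesh link to the other region's gateway

# /etc/tinc/cassvpn/hosts/uswest_gw1 (host files are exchanged between nodes)
Address = 203.0.113.10        # public endpoint of the remote gateway
Subnet = 10.1.0.0/16          # internal range routed via that gateway
```

Because every node knows every host file, tinc will route around a failed gateway as long as another gateway for that subnet is up.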

We are currently evaluating it for our managed Cassandra-in-a-VPC solution, but we haven’t
ever used it in a production environment or under heavy load, so caveat emptor.

As for the snitch… the GPFS is definitely the most flexible. 
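For reference, GPFS (GossipingPropertyFileSnitch) is configured per node in cassandra.yaml plus cassandra-rackdc.properties; the DC and rack names below are just example values you choose yourself:

```
# cassandra.yaml
endpoint_snitch: GossipingPropertyFileSnitch

# conf/cassandra-rackdc.properties (set per node)
dc=us-east-vpc
rack=1a
```

Since the DC/rack names are arbitrary strings gossiped between nodes, this is the snitch that works across any mix of EC2, VPC, and other providers.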

Ben Bromhead
Instaclustr | www.instaclustr.com | @instaclustr | +61 415 936 359

On 10 Jun 2014, at 1:42 am, Ackerman, Mitchell <Mitchell.Ackerman@pgi.com> wrote:

> Peter,
>  
> I too am working on setting up a multi-region VPC Cassandra cluster.  The regions are
connected to each other via OpenVPN tunnels, so we can use internal IP addresses for both
the seeds and the broadcast address.  This allows us to use the EC2Snitch (my interpretation
of the caveat that this snitch won’t work in a multi-region environment is that it won’t
work if you can’t use internal IP addresses, which we can via the VPN tunnels).  All the
C* nodes find each other, and nodetool (or OpsCenter) shows that we have established a multi-datacenter
cluster. 
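A sketch of the relevant cassandra.yaml settings for that kind of setup (the addresses are placeholders; the point is that every address is VPC-internal and reachable over the VPN):

```yaml
endpoint_snitch: Ec2Snitch
listen_address: 10.0.1.12          # this node's VPC-internal IP
broadcast_address: 10.0.1.12       # internal IP works cross-region thanks to the tunnels
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.1.10,10.1.1.10"   # internal seed IPs, e.g. one per region
```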
>  
> Thus far, I’m not happy with the performance of the cluster in such a configuration,
but I don’t think that it is related to this configuration, though it could be.
>  
> Mitchell
>  
> From: Peter Sanford [mailto:psanford@retailnext.net] 
> Sent: Monday, June 09, 2014 7:19 AM
> To: user@cassandra.apache.org
> Subject: Re: VPC AWS
>  
> Your general assessments of the limitations of the EC2 snitches seem to match what we've
found. We're currently using the GossipingPropertyFileSnitch in our VPCs. This is also the
snitch to use if you ever want to have a DC in EC2 and a DC with another hosting provider.

>  
> -Peter
>  
> 
> On Mon, Jun 9, 2014 at 5:48 AM, Alain RODRIGUEZ <arodrime@gmail.com> wrote:
> Hi guys, there are a lot of answers; it looks like this subject interests a lot of
people, so I will let you know how it went for us.
>  
> For now, we are still doing some tests.
>  
> Yet I would like to know how we are supposed to configure Cassandra in this environment:
>  
> - VPC 
> - Multiple datacenters (should be VPCs, one per region, linked through VPN ?)
> - Cassandra 1.2
>  
> We are currently running under EC2MultiRegionSnitch, but with no VPC. Our VPC will have
no public interface, so I am not sure how to configure broadcast address or seeds that are
supposed to be the public IP of the node.
>  
> I could use EC2Snitch, but will cross-region work properly?
>  
> Should I use another snitch?
>  
> Is someone using a similar configuration ?
>  
> Thanks for the information already given, guys; we will get there ;-).
>  
> 
> 2014-06-07 0:05 GMT+02:00 Jonathan Haddad <jon@jonhaddad.com>:
>  
> This may not help you with the migration, but it may with maintenance & management.
 I just put up a blog post on managing VPC security groups with a tool I open sourced at my
previous company.  If you're going to have different VPCs (staging / prod), it might help
with managing security groups.
>  
> http://rustyrazorblade.com/2014/06/an-introduction-to-roadhouse/
>  
> Semi-shameless plug... but relevant.
>  
> 
> On Thu, Jun 5, 2014 at 12:01 PM, Aiman Parvaiz <aiman@shift.com> wrote:
> Cool, thanks again for this.
>  
> 
> On Thu, Jun 5, 2014 at 11:51 AM, Michael Theroux <mtheroux2@yahoo.com> wrote:
> You can have a ring spread across EC2 and the public subnet of a VPC.  That is how we
did our migration.  In our case, we simply replaced the existing EC2 node with a new instance
in the public VPC, restored from a backup taken right before the switch.
>  
> -Mike
>  
> From: Aiman Parvaiz <aiman@shift.com>
> To: Michael Theroux <mtheroux2@yahoo.com> 
> Cc: "user@cassandra.apache.org" <user@cassandra.apache.org> 
> Sent: Thursday, June 5, 2014 2:39 PM
> Subject: Re: VPC AWS
>  
> Thanks for this info Michael. As far as restoring nodes in the public VPC is concerned, I was
thinking (and I might be wrong here) that if we can have a ring spread across EC2 and the public
subnet of a VPC, I can simply decommission nodes in EC2 as I gradually introduce new nodes
in the public subnet of the VPC. I would end up with a ring in the public subnet and could then
migrate it from public to private in a similar way.
>  
> If anyone has any experience/suggestions with this, please share; I would really appreciate
it.
>  
> Aiman
>  
> 
> On Thu, Jun 5, 2014 at 10:37 AM, Michael Theroux <mtheroux2@yahoo.com> wrote:
> The implementation of moving from EC2 to a VPC was a bit of a juggling act.  Our motivation
was twofold:
>  
> 1) We were running out of static IP addresses, and it was becoming increasingly difficult
in EC2 to keep the number of static IP addresses within the number of public IP addresses
EC2 allowed.
> 2) VPC affords us an additional level of security that was desirable.
>  
> However, we needed to consider the following limitations:
>  
> 1) By default, you have a limited number of available public IPs for both EC2 and VPC.
 
> 2) AWS security groups need to be configured to allow traffic for Cassandra to/from instances
in EC2 and the VPC.
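Concretely, the security group rules need to cover Cassandra's inter-node and client ports (defaults for the 1.x line; adjust if you have changed them):

```
7000   inter-node storage (gossip, streaming)
7001   inter-node storage over SSL
7199   JMX (nodetool, OpsCenter agents)
9160   Thrift clients
9042   native-protocol (CQL) clients
```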
>  
> You are correct at the high level that the migration goes from EC2->Public VPC (VPC
with an Internet Gateway)->Private VPC (VPC with a NAT).  The first phase was moving instances
to the public VPC, setting broadcast and seeds to the public IPs we had available.  Basically:
>  
> 1) Take down a node, taking a snapshot for a backup
> 2) Restore the node on the public VPC, assigning it to the correct security group, manually
setting the seeds to other available nodes
> 3) Verify the cluster can communicate
> 4) Repeat
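The per-node loop above can be written down as a runbook. This is only a dry-run sketch (the run wrapper just echoes each command), and the snapshot tag, data path, and backup host are hypothetical placeholders:

```shell
#!/bin/sh
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

# 1) Take down the node, snapshotting first for a backup.
run nodetool drain                        # flush memtables, stop accepting traffic
run nodetool snapshot -t pre-vpc-move     # filesystem-level backup point
run rsync -a /var/lib/cassandra/data/ backup-host:/backups/node1/

# 2) On the replacement instance in the public VPC (correct security group,
#    seeds pointed at other live nodes), restore the data and start Cassandra.
run rsync -a backup-host:/backups/node1/ /var/lib/cassandra/data/
run service cassandra start

# 3) Verify the cluster can communicate before repeating on the next node.
run nodetool status
```

Replacing the echo in run with direct execution turns the checklist into the actual procedure, which keeps the written steps and the executed steps from drifting apart.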
>  
> Realize that the NAT instance on the private subnet will also require a public IP.  What got
really interesting was that near the end of the process we ran out of available IPs, requiring
us to switch the final node that was on EC2 directly to the private VPC (taking down two
nodes at once, which our setup allowed given we had 6 nodes with an RF of 3).  
>  
> What we did, and what I highly suggest for the switch, is to write down every step that has
to happen on every node during the switch.  In our case, many of the moved nodes required
slightly different configurations for items like the seeds.
>  
> It’s been a couple of years, so my memory on this may be a little fuzzy :)
>  
> -Mike
>  
> From: Aiman Parvaiz <aiman@shift.com>
> To: user@cassandra.apache.org; Michael Theroux <mtheroux2@yahoo.com> 
> Sent: Thursday, June 5, 2014 12:55 PM
> Subject: Re: VPC AWS
>  
> Michael, 
> Thanks for the response, I am about to head into something very similar if not exactly
the same. I envision things happening along the same lines as you mentioned. 
> I would be grateful if you could shed some more light on how you went about switching
Cassandra nodes from the public subnet to private without any downtime.
> I have not started on this project yet; I am still in my research phase. I plan to have an
EC2 + public-VPC cluster and then decommission the EC2 nodes to have everything in the public
subnet, and next move it to the private subnet.
>  
> Thanks
>  
> 
> On Thu, Jun 5, 2014 at 8:14 AM, Michael Theroux <mtheroux2@yahoo.com> wrote:
> We personally use the EC2Snitch; however, we don't have the multi-region requirements
you do.
>  
> -Mike
>  
> From: Alain RODRIGUEZ <arodrime@gmail.com>
> To: user@cassandra.apache.org
> Sent: Thursday, June 5, 2014 9:14 AM
> Subject: Re: VPC AWS
>  
> I think you can define a VPC subnet to be public (to have public + private IPs) or private
only.
>  
> Any insight regarding snitches? What snitch do you guys use?
>  
> 
> 2014-06-05 15:06 GMT+02:00 William Oberman <oberman@civicscience.com>:
> I don't think traffic will flow between "classic" EC2 and VPC directly. There is some
kind of gateway bridge instance that sits in between, acting as a NAT.  I would think that
would cause new challenges for:
> -transitions 
> -clients
>  
> Sorry this response isn't heavy on content!  I'm curious how this thread goes...
>  
> Will
>  
> On Thursday, June 5, 2014, Alain RODRIGUEZ <arodrime@gmail.com> wrote:
> Hi guys,
>  
> We are going to move from a cluster made of simple Amazon EC2 servers to a VPC cluster.
We are using Cassandra 1.2.11 and I have some questions regarding this switch and the Cassandra
configuration inside a VPC.
>  
> Actually I found no documentation on this topic, but I am quite sure that some people
are already using VPCs. If you can point me to any documentation regarding VPC / Cassandra,
it would be very nice of you. We have only one DC for now, but we need to remain multi-DC
compatible, since we will add DCs very soon.
>  
> Otherwise, I would like to know if I should keep using EC2MultiRegionSnitch or change the
snitch to something else.
>  
> What about broadcast/listen IPs, seeds...?
>  
> We currently use public IPs for the broadcast address and for seeds, and private ones
for the listen address. Machines inside the VPC will only have private IPs, AFAIK. Should I
keep using a broadcast address?
>  
> Is there any other impact when switching to a VPC?
>  
> Sorry if this topic was already discussed; I was unable to find any useful information...
>  
> 
> -- 
> Will Oberman
> Civic Science, Inc.
> 6101 Penn Avenue, Fifth Floor
> Pittsburgh, PA 15206
> (M) 412-480-7835
> (E) oberman@civicscience.com
> -- 
> Jon Haddad
> http://www.rustyrazorblade.com
> skype: rustyrazorblade

