incubator-cassandra-user mailing list archives

From Alain RODRIGUEZ <arodr...@gmail.com>
Subject Re: VPC AWS
Date Thu, 12 Jun 2014 08:11:11 GMT
Thanks guys for all these insights.

Here is what we plan to do in production, let me know what you think:

From EC2 Classic to Public VPC

The first step is to build a new DC and make it join the current production
Cassandra cluster.

So the steps are:

   1. Create X C* 1.2.11 nodes inside the public VPC - X being the number of
   production nodes.

   2. Configure them - make sure that cluster_name in the public VPC instances
   is the same as in Classic EC2, that the seeds of any node in the new DC
   contain servers from both DCs, and that auto_bootstrap is false (see the
   configuration sketch just after this list).

   3. Edit /etc/cassandra/cassandra-rackdc.properties and set
   dc_suffix=-temp-vpc.

   4. Make sure data is cleaned before starting Cassandra - fresh install
   (see the per-node sequence sketched after step 11).

   5. Make sure the cassandra user can write data - ll /raid0/cassandra should
   show "cassandra:cassandra".

   6. Make sure ports are open using public IPs (
   http://www.datastax.com/documentation/cassandra/1.2/cassandra/security/secureFireWall_r.html
   ).

   7. Make nodes join the ring one by one - service cassandra start.

   8. Alter the keyspace to add the new DC:

 ALTER KEYSPACE cassa_teads WITH replication = {
   'class': 'NetworkTopologyStrategy',
   'eu-west-xl': '3',
   'eu-west-temp-vpc': '3'
 };

The new DC should then start accepting cross-DC writes.
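
For steps 2 and 3, this is roughly the configuration I have in mind on each new
public VPC node (a sketch only - the IPs and seed list are placeholders, and I
am assuming we keep the Ec2MultiRegionSnitch for this DC since those nodes
still have public IPs):

 # /etc/cassandra/cassandra.yaml (excerpt, placeholder values to be replaced)
 cluster_name: 'same name as the EC2 Classic cluster'
 auto_bootstrap: false
 endpoint_snitch: Ec2MultiRegionSnitch   # assumption: unchanged for the public VPC DC
 seed_provider:
     - class_name: org.apache.cassandra.locator.SimpleSeedProvider
       parameters:
           - seeds: "<public IP of a Classic node>,<public IP of a VPC node>"

 # /etc/cassandra/cassandra-rackdc.properties
 dc_suffix=-temp-vpc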

   9. Make sure everything is running fine - tail -fn100
   /raid0/cassandra/logs/output.log.

   10. Rebuild each node of the new DC from the old DC - nodetool rebuild
   eu-west-xl.

   11. Repair all of the new DC - nodetool repair -pr.
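
The per-node sequence for steps 4 to 7, plus the rebuild and repair, would then
be roughly the following - the data directory layout is an assumption based on
our /raid0/cassandra mount, and the port list is the standard one from the
DataStax firewall page linked in step 6:

 # steps 4-5: fresh data directories, owned by the cassandra user
 sudo rm -rf /raid0/cassandra/data /raid0/cassandra/commitlog /raid0/cassandra/saved_caches
 sudo chown -R cassandra:cassandra /raid0/cassandra

 # step 6: security group must allow 7000 (storage), 7001 (SSL storage),
 # 7199 (JMX), 9160 (Thrift clients) and 9042 (native protocol)

 # step 7: join the ring, one node at a time
 sudo service cassandra start
 tail -fn100 /raid0/cassandra/logs/output.log

 # steps 10-11: once the whole new DC is up
 nodetool rebuild eu-west-xl
 nodetool repair -pr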

----------------------------------------------------------------------------------------------------------------------------

   12. Migrate all client services inside the VPC and use the eu-west-temp-vpc
   C* servers.

----------------------------------------------------------------------------------------------------------------------------

   13. Alter the keyspace to drop the old DC:

 ALTER KEYSPACE cassa_teads WITH replication = {
   'class': 'NetworkTopologyStrategy',
   'eu-west-xl': '0', (or maybe I can simply drop this line?)
   'eu-west-temp-vpc': '3'
 };

   14. Decommission the old nodes one by one - nodetool decommission (after the
   checks sketched below).
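
Before actually decommissioning the old nodes I will probably double-check the
ring and the replication map with something like this (plain nodetool and
cqlsh, nothing fancy):

 nodetool status           # both DCs visible, all new nodes Up/Normal
 # and from cqlsh, to confirm the replication settings:
 DESCRIBE KEYSPACE cassa_teads;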

From Public VPC to Private VPC

The next step is to switch from a public VPC Cassandra to a backend, isolated
private VPC (i.e. no public interface).

As there is no public IP in the private VPC, we cannot use EC2MultiRegionSnitch
for the new DC. We will use EC2Snitch, which works well within a local DC and
*should* also work across regions connected through a VPN tunnel (we will need
to test this part). This also means we will not use public IPs in the Cassandra
configuration (broadcast_address should be dropped, and seeds should point to
the private IPs of the nodes in the new DC).
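
Concretely, I imagine the configuration of a private VPC node would look more
or less like this (a sketch only - addresses are placeholders, and the
EC2Snitch-across-VPN behaviour is exactly the part we still have to validate):

 # /etc/cassandra/cassandra.yaml (excerpt) on a private VPC node
 cluster_name: 'same name as before'
 endpoint_snitch: Ec2Snitch
 listen_address: <private IP of this node>
 rpc_address: <private IP of this node>
 # broadcast_address: dropped, there is no public IP to broadcast
 seed_provider:
     - class_name: org.apache.cassandra.locator.SimpleSeedProvider
       parameters:
           # public IPs for the old (public VPC) DC, private IPs for the new DC
           - seeds: "<public IP of a public VPC node>,<private IP of a private VPC node>"

 # /etc/cassandra/cassandra-rackdc.properties
 dc_suffix=-vpc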

So the steps are:

   1. Create X C* 1.2.11 nodes inside the private VPC - X being the number of
   production nodes.

   2. Configure them - make sure that cluster_name in the private VPC instances
   is the same as in the public VPC, that the seeds of any node in the new DC
   contain servers from both DCs (mixed private/public IPs: public IPs for the
   old DC's public servers, private IPs for the new DC's nodes in the private
   subnet), and that auto_bootstrap is false.

   3. Edit /etc/cassandra/cassandra-rackdc.properties and set dc_suffix=-vpc.

   4. Make sure data is cleaned before starting Cassandra - fresh install.

   5. Make sure the cassandra user can write data - ll /raid0/cassandra should
   show "cassandra:cassandra".

   6. Make sure ports are open (
   http://www.datastax.com/documentation/cassandra/1.2/cassandra/security/secureFireWall_r.html
   ).

   7. Make nodes join the ring one by one - service cassandra start.

   8. Alter the keyspace to add the new DC:

 ALTER KEYSPACE cassa_teads WITH replication = {
   'class': 'NetworkTopologyStrategy',
   'eu-west-vpc': '3',
   'eu-west-temp-vpc': '3'
 };

The new DC should then start accepting cross-DC writes.

   9. Make sure everything is running fine - tail -fn100
   /raid0/cassandra/logs/output.log.

   10. Rebuild each node of the new DC from the old DC - nodetool rebuild
   eu-west-temp-vpc (quick checks sketched just after step 11).

   11. Repair all of the new DC - nodetool repair -pr.
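
To check that the rebuild really streamed the data into the new DC before
moving the clients over, something like this on each new node should be enough:

 nodetool netstats                 # active / pending streams while the rebuild runs
 nodetool status                   # Load of the new DC nodes should grow to roughly the old DC's
 du -sh /raid0/cassandra/data      # rough sanity check on disk usage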

----------------------------------------------------------------------------------------------------------------------------

   12. All services must now use the private VPC (use the eu-west-vpc C*
   servers).

----------------------------------------------------------------------------------------------------------------------------


   13. Alter the keyspace to drop the old DC:

 ALTER KEYSPACE cassa_teads WITH replication = {
   'class': 'NetworkTopologyStrategy',
   'eu-west-vpc': '3',
   'eu-west-temp-vpc': '0'
 };

   14. Then remove the old DC entry entirely:

 ALTER KEYSPACE cassa_teads WITH replication = {
   'class': 'NetworkTopologyStrategy',
   'eu-west-vpc': '3'
 };

   15. Decommission the old nodes one by one - nodetool decommission.



Any insight is of course welcome once again. Once we have run this in
production I will let you know how things went :).

Wish me luck!


2014-06-11 20:11 GMT+02:00 Ackerman, Mitchell <Mitchell.Ackerman@pgi.com>:

>  FYI, we have established an OpenVPN/NAT mesh between our regions with
> good success.
>
>
>
> *From:* Peter Sanford [mailto:psanford@retailnext.net]
> *Sent:* Wednesday, June 11, 2014 8:57 AM
>
> *To:* user@cassandra.apache.org
> *Subject:* Re: VPC AWS
>
>
>
> Tinc's developers acknowledge that there are some fairly serious unfixed
> security issues in their protocol: http://www.tinc-vpn.org/security/. As
> such, I do not consider tinc to be a good choice for production systems.
>
>
>
> Either IPSec or OpenVPN are reasonable for connecting VPCs in different
> regions, and Amazon has published guides for both methods[1][2]. We use
> IPSec because we have a lot of experience with it, but I'm hesitant to
> recommend it because it is easy to configure in an insecure manner.
>
>
>
> [1]: https://aws.amazon.com/articles/5472675506466066
>
> [2]: https://aws.amazon.com/articles/0639686206802544
>
>
>
> On Tue, Jun 10, 2014 at 6:29 PM, Ben Bromhead <ben@instaclustr.com> wrote:
>
> Have a look at http://www.tinc-vpn.org/, mesh based and handles multiple
> gateways for the same network in a graceful manner (so you can run two
> gateways per region for HA).
>
>
>
> Also supports NAT traversal if you need to do public-private clusters.
>
>
>
> We are currently evaluating it for our managed Cassandra in a VPC
> solution, but we haven't ever used it in a production environment or with a
> heavy load, so caveat emptor.
>
>
>
> As for the snitch... the GPFS is definitely the most flexible.
>
>
>
> Ben Bromhead
>
> Instaclustr | www.instaclustr.com | @instaclustr
> <http://twitter.com/instaclustr> | +61 415 936 359
>
>
>
> On 10 Jun 2014, at 1:42 am, Ackerman, Mitchell <Mitchell.Ackerman@pgi.com>
> wrote:
>
>
>
>   Peter,
>
>
>
> I too am working on setting up a multi-region VPC Cassandra cluster.  Each
> region is connected to each other via an OpenVPN tunnel, so we can use
> internal IP addresses for both the seeds and broadcast address.   This
> allows us to use the EC2Snitch (my interpretation of the caveat that this
> snitch won't work in a multi-region environment is that it won't work if
> you can't use internal IP addresses, which we can via the VPN tunnels).
>  All the C* nodes find each other, and nodetool (or OpsCenter) shows that
> we have established a multi-datacenter cluster.
>
>
>
> Thus far, I'm not happy with the performance of the cluster in such a
> configuration, but I don't think that it is related to this configuration,
> though it could be.
>
>
>
> Mitchell
>
>
>
> *From:* Peter Sanford [mailto:psanford@retailnext.net
> <psanford@retailnext.net>]
> *Sent:* Monday, June 09, 2014 7:19 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: VPC AWS
>
>
>
> Your general assessments of the limitations of the Ec2 snitches seem to
> match what we've found. We're currently using the
> GossipingPropertyFileSnitch in our VPCs. This is also the snitch to use if
> you ever want to have a DC in EC2 and a DC with another hosting provider.
>
>
>
> -Peter
>
>
>
> On Mon, Jun 9, 2014 at 5:48 AM, Alain RODRIGUEZ <arodrime@gmail.com>
> wrote:
>
> Hi guys, there are a lot of answers; it looks like this subject interests a
> lot of people, so I will end up letting you know how it went for us.
>
>
>
> For now, we are still doing some tests.
>
>
>
> Yet I would like to know how we are supposed to configure Cassandra in
> this environment :
>
>
>
> - VPC
>
> - Multiple datacenters (should be VPCs, one per region, linked through VPN
> ?)
>
> - Cassandra 1.2
>
>
>
> We are currently running under EC2MultiRegionSnitch, but with no VPC. Our
> VPC will have no public interface, so I am not sure how to configure
> broadcast address or seeds that are supposed to be the public IP of the
> node.
>
>
>
> I could use EC2Snitch, but will cross region work properly ?
>
>
>
> Should I use an other snitch ?
>
>
>
> Is someone using a similar configuration ?
>
>
>
> Thanks for the information already given, guys, we will achieve this ;-).
>
>
>
> 2014-06-07 0:05 GMT+02:00 Jonathan Haddad <jon@jonhaddad.com>:
>
>
>
> This may not help you with the migration, but it may with maintenance &
> management.  I just put up a blog post on managing VPC security groups with
> a tool I open sourced at my previous company.  If you're going to have
> different VPCs (staging / prod), it might help with managing security
> groups.
>
>
>
> http://rustyrazorblade.com/2014/06/an-introduction-to-roadhouse/
>
>
>
> Semi shameless plug... but relevant.
>
>
>
> On Thu, Jun 5, 2014 at 12:01 PM, Aiman Parvaiz <aiman@shift.com> wrote:
>
> Cool, thanks again for this.
>
>
>
> On Thu, Jun 5, 2014 at 11:51 AM, Michael Theroux <mtheroux2@yahoo.com>
> wrote:
>
> You can have a ring spread across EC2 and the public subnet of a VPC.
>  That is how we did our migration.  In our case, we simply replaced the
> existing EC2 node with a new instance in the public VPC, restored from a
> backup taken right before the switch.
>
>
>
> -Mike
>
>
>    ------------------------------
>
> *From:* Aiman Parvaiz <aiman@shift.com>
> *To:* Michael Theroux <mtheroux2@yahoo.com>
> *Cc:* "user@cassandra.apache.org" <user@cassandra.apache.org>
> *Sent:* Thursday, June 5, 2014 2:39 PM
> *Subject:* Re: VPC AWS
>
>
>
> Thanks for this info Michael. As far as restoring node in public VPC is
> concerned I was thinking ( and I might be wrong here) if we can have a ring
> spread across EC2 and public subnet of a VPC, this way I can simply
> decommission nodes in Ec2 as I gradually introduce new nodes in public
> subnet of VPC and I will end up with a ring in public subnet and then
> migrate them from public to private in a similar way may be.
>
>
>
> If anyone has any experience/ suggestions with this please share, would
> really appreciate it.
>
>
>
> Aiman
>
>
>
> On Thu, Jun 5, 2014 at 10:37 AM, Michael Theroux <mtheroux2@yahoo.com>
> wrote:
>
> The implementation of moving from EC2 to a VPC was a bit of a juggling
> act.  Our motivation was two fold:
>
>
>
> 1) We were running out of static IP addresses, and it was becoming
> increasingly difficult in EC2 to design around limiting the number of
> static IP addresses to the number of public IP addresses EC2 allowed
>
> 2) VPC affords us an additional level of security that was desirable.
>
>
>
> However, we needed to consider the following limitations:
>
>
>
> 1) By default, you have a limited number of available public IPs for both
> EC2 and VPC.
>
> 2) AWS security groups need to be configured to allow traffic for
> Cassandra to/from instances in EC2 and the VPC.
>
>
>
> You are correct at the high level that the migration goes from EC2->Public
> VPC (VPC with an Internet Gateway)->Private VPC (VPC with a NAT).  The
> first phase was moving instances to the public VPC, setting broadcast and
> seeds to the public IPs we had available.  Basically:
>
>
>
> 1) Take down a node, taking a snapshot for a backup
>
> 2) Restore the node on the public VPC, assigning it to the correct
> security group, manually setting the seeds to other available nodes
>
> 3) Verify the cluster can communicate
>
> 4) Repeat
>
>
>
> Realize the NAT instance on the private subnet will also require a public
> IP.  What got really interesting is that near the end of the process we ran
> out of available IPs, requiring us to switch the final node that was on EC2
> directly to the private VPC (and taking down two nodes at once, which our
> setup allowed given we had 6 nodes with an RF of 3).
>
>
>
> What we did, and highly suggest for the switch, is to write down every
> step that has to happen on every node during the switch.  In our case, many
> of the moved nodes required slightly different configurations for items
> like the seeds.
>
>
>
> Its been a couple of years, so my memory on this maybe a little fuzzy :)
>
>
>
> -Mike
>
>
>    ------------------------------
>
> *From:* Aiman Parvaiz <aiman@shift.com>
> *To:* user@cassandra.apache.org; Michael Theroux <mtheroux2@yahoo.com>
> *Sent:* Thursday, June 5, 2014 12:55 PM
> *Subject:* Re: VPC AWS
>
>
>
> Michael,
>
> Thanks for the response, I am about to head in to something very similar
> if not exactly same. I envision things happening on the same lines as you
> mentioned.
>
> I would be grateful if you could please throw some more light on how you
> went about switching cassandra nodes from public subnet to private with out
> any downtime.
>
> I have not started on this project yet, still in my research phase. I plan
> to have a ec2+public VPC cluster and then decomission ec2 nodes to have
> everything in public subnet, next would be to move it to private subnet.
>
>
>
> Thanks
>
>
>
> On Thu, Jun 5, 2014 at 8:14 AM, Michael Theroux <mtheroux2@yahoo.com>
> wrote:
>
> We personally use the EC2Snitch, however, we don't have the multi-region
> requirements you do.
>
>
>
> -Mike
>
>
>    ------------------------------
>
> *From:* Alain RODRIGUEZ <arodrime@gmail.com>
> *To:* user@cassandra.apache.org
>
> *Sent:* Thursday, June 5, 2014 9:14 AM
> *Subject:* Re: VPC AWS
>
>
>
> I think you can define VPC subnet to be public (to have public + private
> IPs) or private only.
>
>
>
> Any insight regarding snitches ? What snitch do you guys use ?
>
>
>
> 2014-06-05 15:06 GMT+02:00 William Oberman <oberman@civicscience.com>:
>
> I don't think traffic will flow between "classic" ec2 and vpc directly.
> There is some kind of gateway bridge instance that sits between, acting as
> a NAT.   I would think that would cause new challenges for:
>
> -transitions
>
> -clients
>
>
>
> Sorry this response isn't heavy on content!  I'm curious how this thread
> goes...
>
>
>
> Will
>
>
>
> On Thursday, June 5, 2014, Alain RODRIGUEZ <arodrime@gmail.com> wrote:
>
> Hi guys,
>
>
>
> We are going to move from a cluster made of simple Amazon EC2 servers to a
> VPC cluster. We are using Cassandra 1.2.11 and I have some questions
> regarding this switch and the Cassandra configuration inside a VPC.
>
>
>
> Actually I found no documentation on this topic, but I am quite sure that
> some people are already using VPC. If you can point me to any documentation
> regarding VPC / Cassandra, it would be very nice of you. We have only one
> DC for now, but we need to remain multi DC compatible, since we will add DC
> very soon.
>
>
>
> Else, I would like to know if I should keep using EC2MultiRegionSnitch or
> change the snitch to anything else.
>
>
>
> What about broadcast/listen ip, seeds...?
>
>
>
> We currently use public IPs for the broadcast address and for seeds. We use
> private ones for the listen address. Machines inside the VPC will only have
> private IPs AFAIK. Should I keep using a broadcast address?
>
>
>
> Is there any other incidence when switching to a VPC ?
>
>
>
> Sorry if the topic was already discussed, I was unable to find any useful
> information...
>
>
>
> --
> Will Oberman
> Civic Science, Inc.
> 6101 Penn Avenue, Fifth Floor
> Pittsburgh, PA 15206
> (M) 412-480-7835
> (E) oberman@civicscience.com
>
>
> --
> Jon Haddad
> http://www.rustyrazorblade.com
>
> skype: rustyrazorblade
>
>
>
>
>
