From: Dan Kogan <dan@iqtell.com>
To: user@cassandra.apache.org
Date: Thu, 13 Jun 2013 16:24:40 -0400
Subject: RE: Looking for a fully working AWS multi DC configuration.

For the ones that need access by public IP, we have not found a way to automate it. I would be curious to know if anyone else has been able to do that.

In the case of access by private IP, we just specify the security group as the source.

From: Alain RODRIGUEZ [mailto:arodrime@gmail.com]
Sent: Wednesday, June 05, 2013 5:45 PM
To: user@cassandra.apache.org
Subject: Re: Looking for a fully working AWS multi DC configuration.

Do you open all these nodes one by one in every Security Group in each region every time you add a node, or did you manage to automate it somehow?

2013/6/5 Dan Kogan <dan@iqtell.com>

Hi,

We are using a very similar configuration. From our experience, Cassandra nodes in the same DC need access over both the public and private IP on the storage port (7000/7001). Nodes from other DCs will need access over the public IP on the storage port.

All Cassandra nodes also need access over the public IP on the Thrift port (9160).

Dan

From: Alain RODRIGUEZ [mailto:arodrime@gmail.com]
Sent: Wednesday, June 05, 2013 9:49 AM
To: user@cassandra.apache.org
Subject: Looking for a fully working AWS multi DC configuration.

Hi,

We used to run on a single DC (EC2Snitch / SimpleStrategy). For latency reasons we had to open a new DC in the US (us-east). We run C* 1.2.2. We don't use VPC.

Now we use:

- 2 DCs (eu-west, us-east)
- EC2MultiRegionSnitch / NTS (NetworkTopologyStrategy)
- public IPs as broadcast_address and seeds
- private IPs as listen_address

Yet we are experiencing some trouble (a node can't reach itself, "Could not start register mbean in JMX"...), mainly because of the use of public IPs and AWS inter-region communication.

If someone has successfully set up this kind of cluster, I would like to know whether our configuration is correct and whether I am missing something.

I would also like to know which ports I have to open, and where I have to open them from.

Any insight would be greatly appreciated.
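The setup Alain lists (Ec2MultiRegionSnitch, public IPs as broadcast_address and seeds, private IPs as listen_address) maps onto cassandra.yaml roughly as below. This is a sketch, not a verified configuration; all addresses are placeholders:

```yaml
# cassandra.yaml fragment (sketch; every IP here is a placeholder)
endpoint_snitch: Ec2MultiRegionSnitch
listen_address: 10.0.1.5           # this node's private EC2 IP
broadcast_address: 203.0.113.10    # this node's public (elastic) IP
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # public IPs, ideally including at least one seed per DC
      - seeds: "203.0.113.10,198.51.100.20"
```

Keyspaces would then use NetworkTopologyStrategy with a replication factor set per DC.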
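As a sketch of how the per-node security-group updates Dan describes might be scripted (this is not the list's own tooling), the AWS CLI's `authorize-security-group-ingress` command can add one ingress rule per port per node. The group ID and IP below are placeholders, and the function only prints the commands rather than running them:

```shell
#!/bin/sh
# Sketch: emit the AWS CLI calls that would open Cassandra's ports
# (storage 7000, SSL storage 7001, Thrift 9160) to one node's public IP.
# The security group ID and node IP are placeholders.
sg_rules() {
  sg_id="$1"
  node_ip="$2"
  for port in 7000 7001 9160; do
    echo "aws ec2 authorize-security-group-ingress" \
         "--group-id $sg_id --protocol tcp" \
         "--port $port --cidr $node_ip/32"
  done
}

sg_rules sg-12345678 203.0.113.10
```

Piping the output to `sh` would apply the rules; running the loop once per region's security group would cover the cross-DC case.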