From: Andrey Ilinykh
To: user@cassandra.apache.org
Date: Thu, 6 Feb 2014 15:28:46 -0800
Subject: Re: Adding datacenter for move to vnodes

My understanding is that you can't mix vnodes and regular nodes in the same DC. Is that correct?


On Thu, Feb 6, 2014 at 2:16 PM, Vasileios Vlachos <vasileiosvlachos@gmail.com> wrote:

Hello,

My question is: why would you need another DC to migrate to vnodes? How about decommissioning each node in turn, changing cassandra.yaml accordingly, deleting the data, and bringing the node back into the cluster to let it bootstrap from the others?

We did that recently with our demo cluster. Is that wrong in any way? The only thing to take into consideration is the disk space, I think. We are not using Amazon, but I am not sure how that would be different for this particular issue.

Thanks,

Bill
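
For reference, a rough per-node sketch of the procedure Bill describes; the service name, file paths and token count are assumptions and vary by install:

    nodetool decommission    # stream this node's data to the rest of the cluster
    sudo service cassandra stop
    # in cassandra.yaml: comment out initial_token and set num_tokens (e.g. 256)
    sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*
    sudo service cassandra start    # node re-bootstraps from the others, now with vnodes

As noted above, the remaining nodes need enough free disk to absorb the decommissioned node's data while this runs.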

On 6 Feb 2014 16:34, "Alain RODRIGUEZ" <arodrime@gmail.com> wrote:
Glad it helps.

Good luck with this.

Cheers,

Alain


2014-02-06 17:30 GMT+01:00 Katriel Traum <katriel@google.com>:
Thank you Alain! That was exactly what I was looking for. I was worried I'd have to do a rolling restart to change the snitch.

Katriel



On Thu, Feb 6, 2014 at 1:10 PM, Alain RODRIGUEZ <arodrime@gmail.com> wrote:
Hi, we did this exact same operation here too, with no issue.

Contrary to Paulo, we did not modify our snitch.

We simply added a "dc_suffix" property in the cassandra-rackdc.properties conf file for nodes in the new cluster:

# Add a suffix to a datacenter name. Used by the Ec2Snitch and Ec2MultiRegionSnitch
# to append a string to the EC2 region name.
dc_suffix=-xl

So our new cluster DC is basically: eu-west-xl
I think this is less risky, at least it is easier to do.

Hope this helps.
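
Worth adding for anyone following either route: before running rebuild against the new DC (eu-west-xl in this example), each keyspace's replication has to include it. A CQL sketch to run from cqlsh; the keyspace name and replication factors here are made up:

    ALTER KEYSPACE my_keyspace
      WITH replication = {'class': 'NetworkTopologyStrategy',
                          'eu-west': 3, 'eu-west-xl': 3};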


2014-02-02 11:42 GMT+01:00 Paulo Ricardo Motta Gomes <paulo.motta@chaordicsystems.com>:

We had a similar situation and what we did was first migrate the 1.1 cluster to GossipingPropertyFileSnitch, making sure that for each node we specified the correct availability zone as the rack in the cassandra-rackdc.properties. In this way, the GossipingPropertyFileSnitch is equivalent to the EC2MultiRegionSnitch, so the data location does not change and no repair is needed afterwards. So, if your nodes are located in the us-east-1e AZ, your cassandra-rackdc.properties should look like:

dc=us-east
rack=1e
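
The snitch itself is set in cassandra.yaml on each existing node; this is the change that needs the rolling restart mentioned earlier. A sketch (the config file location depends on your install):

    # cassandra.yaml
    endpoint_snitch: GossipingPropertyFileSnitch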

After this step is complete on all nodes, you can add a new datacenter, specifying a different dc and rack in the cassandra-rackdc.properties of the new DC. Make sure you upgrade your initial datacenter to 1.2 before adding a new datacenter with vnodes enabled (of course).
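
A minimal sketch of what the new, vnode-enabled DC's nodes might carry; the dc/rack names and token count below are assumptions:

    # cassandra-rackdc.properties on each new node
    dc=us-east-vnodes
    rack=1e

    # cassandra.yaml on each new node
    num_tokens: 256
    auto_bootstrap: false    # join without streaming; populate with nodetool rebuild afterwards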

Cheers


On Sun, Feb 2, 2014 at 6:37 AM, Katriel Traum <katriel@google.com> wrote:
Hello list.

I'm upgrading a 1.1 cassandra cluster to 1.2(.13).
I've read here and in other places that the best way to migrate to vnodes is to add a new DC with the same number of nodes, and run rebuild on each of them.
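
For reference, that rebuild is run once on each node of the new DC, naming the existing DC as the streaming source; the DC name below is an assumption:

    nodetool rebuild us-east    # replace us-east with your existing DC's name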
However, I'm faced with the fact that I'm using EC2MultiRegion snitch, which automagically creates the DC and RACK.

Any ideas how I can go about adding a new DC with this kind of setup? I need these new machines to be in the same EC2 Region as the current ones, so adding to a new Region is not an option.

TIA,
Katriel



--
Paulo Motta
Chaordic | Platform
www.chaordic.com.br




