Subject: Re: Upgrading 1.1 to 1.2 in-place
From: Tupshin Harper
To: user@cassandra.apache.org
Date: Mon, 30 Dec 2013 09:24:47 -0500

Sorry for the misinformation. Totally forgot about that being supported since I've never seen the combination actually used. Correct that it should work, though.

On Dec 30, 2013 2:18 PM, "Hannu Kröger" <hkroger@gmail.com> wrote:
Hi,

Random Partitioner + VNodes are a supported combo based on DataStax documentation:
http://www.datastax.com/documentation/cassandra/1.2/webhelp/cassandra/architecture/architecturePartitionerAbout_c.html
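For reference, a minimal sketch of the relevant cassandra.yaml settings on a 1.2 node that keeps the existing partitioner while turning vnodes on (the 256 is just the commonly suggested default, not something mandated by this thread):

    num_tokens: 256
    partitioner: org.apache.cassandra.dht.RandomPartitioner

The partitioner line has to match what the 1.1 cluster is already using, or the upgraded node will refuse to join the ring.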

How else would you even migrate from 1.1 to vnodes, since migrating from one partitioner to another is such a huge amount of work?

Cheers,
Hannu


2013/12/30 Edward Capriolo <edlinuxguru@gmail.com>
What is the technical limitation that vnodes need murmur? That seems uncool for long-time users?


On Monday, December 30, 2013, Jean-Armel Luce <jaluce06@gmail.com> wrote:
> Hi,
>
> I don't know how your application works, but I explained during the last Cassandra Summit Europe how we did the migration from a relational database to Cassandra without any interruption of service.
>
> You can have a look at the video C* Summit EU 2013: The Cassandra Experience at Orange

>
> And use the mod-dup module https://github.com/Orange-OpenSource/mod_dup
>
> For copying data from your Cassandra 1.1 cluster to the Cassandra 1.2 cluster, you can back up your data and then use sstableloader (in this case, you will not have to modify the timestamp as I did for the migration from relational to Cassandra).
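A rough sketch of that path, with placeholder host and keyspace/column family names; this is only the general shape, not a tested procedure:

    # on a 1.1 node: snapshot so you have a consistent set of sstables
    nodetool -h old-node snapshot my_keyspace
    # copy the snapshotted sstables somewhere the 1.2 cluster can reach,
    # into a directory ending in <keyspace>/<columnfamily>, then stream them in
    sstableloader -d new-node1,new-node2 /path/to/my_keyspace/my_columnfamily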
>
> Hope that helps !!
>
> Jean Armel
>
>
>
> 2013/12/30 Tupshin Harper <tupshin@tupshin.com>
>>
>> No. This is not going to work. The vnodes feature requires the Murmur3 partitioner, which was introduced with Cassandra 1.2.
>>
>> Since you are currently using 1.1, you must be using the random partitioner, which is not compatible with vnodes.
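If you would rather check than assume which partitioner a cluster is on, it is right there in cassandra.yaml (the path below is just a typical package location):

    grep '^partitioner:' /etc/cassandra/conf/cassandra.yaml
    # org.apache.cassandra.dht.RandomPartitioner   -> the pre-1.2 default
    # org.apache.cassandra.dht.Murmur3Partitioner  -> the default for new 1.2 clusters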
>>
>> Because the partitioner determines the physical layout of all of your data on disk and across the cluster, it is not possible to change the partitioner without taking some downtime to rewrite all of your data.
>>
>> You should probably plan on an upgrade to 1.2 but without also switching to vnodes at this point.
>>
>> -Tupshin
>>
>> On Dec 30, 2013 9:46 AM, "Katriel Traum" <katriel@google.com> wrote:
>>>
>>> Hello list,
>>> I have a 2-DC setup with a DC1:3, DC2:3 replication factor. DC1 has 6 nodes, DC2 has 3. This whole setup runs on AWS, running Cassandra 1.1.
>>> Here's my nodetool ring:
>>> 1.1.1.1  eu-west  1a  Up  Normal  55.07 GB   50.00%   0
>>> 2.2.2.1  us-east  1b  Up  Normal  107.82 GB  100.00%  1
>>> 1.1.1.2  eu-west  1b  Up  Normal  53.98 GB   50.00%   28356863910078205288614550619314017622
>>> 1.1.1.3  eu-west  1c  Up  Normal  54.85 GB   50.00%   56713727820156410577229101238628035242
>>> 2.2.2.2  us-east  1d  Up  Normal  107.25 GB  100.00%  56713727820156410577229101238628035243
>>> 1.1.1.4  eu-west  1a  Up  Normal  54.99 GB   50.00%   85070591730234615865843651857942052863
>>> 1.1.1.5  eu-west  1b  Up  Normal  55.1 GB    50.00%   113427455640312821154458202477256070484
>>> 2.2.2.3  us-east  1e  Up  Normal  106.78 GB  100.00%  113427455640312821154458202477256070485
>>> 1.1.1.6  eu-west  1c  Up  Normal  55.01 GB   50.00%   141784319550391026443072753096570088105
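Side note on that ring: the eu-west tokens are the usual evenly spaced RandomPartitioner tokens for six nodes (multiples of 2**127/6), and the us-east tokens are every second eu-west token plus one, the standard trick for balancing a second DC. A quick way to regenerate that spacing, give or take a difference of one from the values above due to rounding:

    python -c "print('\n'.join(str(i * (2**127) // 6) for i in range(6)))"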
>>>
>>> I am going to upgrade my machine type, upgrade to 1.2, and go from 6 nodes to 3 in DC1. I will have to do it on the live system.
>>> I'd appreciate any comments about my plan.
>>> 1. Decommission a 1.1 node.
>>> 2. Bootstrap a new one in-place, Cassandra 1.2, vnodes enabled (I am trying to avoid a rebalance later on).
>>> 3. When done, decommission nodes 4-6 at DC1.
>>> Issues I've spotted:
>>> 1. I'm guessing I will have an unbalanced cluster for the period where I have 1.2 + vnodes and 1.1 mixed.
>>> 2. Rollback is cumbersome; snapshots won't help here.
>>> Any feedback appreciated
>>> Katriel
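A hedged sketch of what steps 1-3 above could look like as commands; the host names, paths and vnode count are placeholders, and this is the general shape rather than a tested runbook:

    # 1. retire one 1.1 node cleanly (it streams its ranges to the remaining DC1 nodes)
    nodetool -h <node-being-replaced> decommission

    # 2. on the replacement, before the first start of Cassandra 1.2, set in cassandra.yaml:
    #      num_tokens: 256                                           (enables vnodes)
    #      partitioner: org.apache.cassandra.dht.RandomPartitioner   (must match the cluster)
    sudo service cassandra start
    nodetool -h <new-1.2-node> netstats    # watch the bootstrap streaming finish

    # 3. once the replacements are in, shrink DC1 by retiring the extra nodes one at a time
    nodetool -h <node-4> decommission      # repeat for nodes 5 and 6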
>
>

--
Sorry this was sent from mobile. Will do less grammar and spell check than usual.
