Subject: Re: Upgrading 1.1 to 1.2 in-place
From: Edward Capriolo <edlinuxguru@gmail.com>
To: user@cassandra.apache.org
Date: Mon, 30 Dec 2013 09:01:56 -0500

What is the technical limitation that makes vnodes need murmur3? That seems uncool for long-time users.

On Monday, December 30, 2013, Jean-Armel Luce wrote:
> Hi,
>
> I don't know how your application works, but during the last Cassandra Summit Europe I explained how we migrated from a relational database to Cassandra without any interruption of service.
>
> You can have a look at the video "C* Summit EU 2013: The Cassandra Experience at Orange"
> and use the mod_dup module: https://github.com/Orange-OpenSource/mod_dup
>
> For copying data from your Cassandra 1.1 cluster to the Cassandra 1.2 cluster, you can back up your data and then use sstableloader (in this case, you will not have to modify the timestamps as I did for the migration from relational to Cassandra).
>
> Hope that helps!
>
> Jean Armel
>
> 2013/12/30 Tupshin Harper
>>
>> No. This is not going to work. The vnodes feature requires the murmur3 partitioner, which was introduced with Cassandra 1.2.
>>
>> Since you are currently using 1.1, you must be using the random partitioner, which is not compatible with vnodes.
>>
>> Because the partitioner determines the physical layout of all of your data on disk and across the cluster, it is not possible to change partitioners without taking some downtime to rewrite all of your data.
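Tupshin's point about the partitioner determining physical layout can be sketched as follows. This is an illustrative approximation, not Cassandra's code: RandomPartitioner genuinely derives its token from the MD5 of the key, but the `murmur3_token_standin` below is a fake that substitutes SHA-1 for the real MurmurHash3, purely to show that a different hash assigns the same key a different token, and therefore a different position on the ring.

```python
import hashlib

RANDOM_RING = 2**127  # RandomPartitioner token space: [0, 2**127)

def random_partitioner_token(key: bytes) -> int:
    # RandomPartitioner: the MD5 digest of the key, interpreted as a
    # signed big integer, absolute value taken.
    digest = hashlib.md5(key).digest()
    return abs(int.from_bytes(digest, "big", signed=True))

def murmur3_token_standin(key: bytes) -> int:
    # Stand-in only: the real Murmur3Partitioner uses MurmurHash3
    # (x64, 128-bit) and keeps 64 bits; we fake that signed 64-bit
    # range with the first 8 bytes of SHA-1.
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:8], "big", signed=True)

key = b"user:42"
print(random_partitioner_token(key))  # a number in [0, 2**127)
print(murmur3_token_standin(key))     # a signed 64-bit number
# The two tokens differ, so after a partitioner switch every row would
# hash to a different position on the ring -- hence the full rewrite
# of data that Tupshin describes.
```

Because every key's token changes, there is no in-place conversion: the data has to be re-streamed under the new partitioner.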
>>
>> You should probably plan on an upgrade to 1.2, but without also switching to vnodes at this point.
>>
>> -Tupshin
>>
>> On Dec 30, 2013 9:46 AM, "Katriel Traum" wrote:
>>>
>>> Hello list,
>>> I have a 2-DC setup with a DC1:3, DC2:3 replication factor. DC1 has 6 nodes, DC2 has 3. This whole setup runs on AWS, running Cassandra 1.1.
>>> Here's my nodetool ring:
>>> 1.1.1.1  eu-west  1a  Up  Normal   55.07 GB   50.00%  0
>>> 2.2.2.1  us-east  1b  Up  Normal  107.82 GB  100.00%  1
>>> 1.1.1.2  eu-west  1b  Up  Normal   53.98 GB   50.00%  28356863910078205288614550619314017622
>>> 1.1.1.3  eu-west  1c  Up  Normal   54.85 GB   50.00%  56713727820156410577229101238628035242
>>> 2.2.2.2  us-east  1d  Up  Normal  107.25 GB  100.00%  56713727820156410577229101238628035243
>>> 1.1.1.4  eu-west  1a  Up  Normal   54.99 GB   50.00%  85070591730234615865843651857942052863
>>> 1.1.1.5  eu-west  1b  Up  Normal   55.1 GB    50.00%  113427455640312821154458202477256070484
>>> 2.2.2.3  us-east  1e  Up  Normal  106.78 GB  100.00%  113427455640312821154458202477256070485
>>> 1.1.1.6  eu-west  1c  Up  Normal   55.01 GB   50.00%  141784319550391026443072753096570088105
>>>
>>> I am going to upgrade my machine type, upgrade to 1.2, and shrink DC1 from 6 nodes to 3. I will have to do it on the live system.
>>> I'd appreciate any comments on my plan:
>>> 1. Decommission a 1.1 node.
>>> 2. Bootstrap a new one in-place, Cassandra 1.2, vnodes enabled (I am trying to avoid a rebalance later on).
>>> 3. When done, decommission nodes 4-6 at DC1.
>>> Issues I've spotted:
>>> 1. I'm guessing I will have an unbalanced cluster for the period where 1.2+vnodes and 1.1 are mixed.
>>> 2. Rollback is cumbersome; snapshots won't help here.
>>> Any feedback appreciated.
>>> Katriel

--
Sorry this was sent from mobile. Will do less grammar and spell check than usual.
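Katriel's "Owns" column can be reproduced from the tokens in the ring output. The sketch below rests on an assumption about what nodetool is reporting here: effective ownership, i.e. RF times each node's primary-range share of its own DC's token ring (DC1 with RF 3 over 6 evenly spaced nodes gives 50% each; DC2 with RF 3 over 3 nodes gives 100% each).

```python
RING = 2**127  # RandomPartitioner token space

def effective_ownership(tokens, rf):
    """Map each token to rf * (its primary range / whole ring)."""
    tokens = sorted(tokens)
    shares = {}
    for i, t in enumerate(tokens):
        prev = tokens[i - 1]           # i == 0 wraps to the last token
        primary = (t - prev) % RING    # size of the range (prev, t]
        shares[t] = rf * primary / RING
    return shares

# Tokens copied from the nodetool ring output above, grouped by DC.
dc1_eu_west = [0,
               28356863910078205288614550619314017622,
               56713727820156410577229101238628035242,
               85070591730234615865843651857942052863,
               113427455640312821154458202477256070484,
               141784319550391026443072753096570088105]
dc2_us_east = [1,
               56713727820156410577229101238628035243,
               113427455640312821154458202477256070485]

print({f"{v:.0%}" for v in effective_ownership(dc1_eu_west, rf=3).values()})
print({f"{v:.0%}" for v in effective_ownership(dc2_us_east, rf=3).values()})
# -> {'50%'} and {'100%'}, matching the Owns column in the ring output
```

This also shows why the us-east tokens are each exactly one greater than an eu-west token: offsetting by 1 keeps both DCs' rings evenly spaced without any two nodes sharing a token.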