Subject: Re: Upgrading 1.1 to 1.2 in-place
From: Tupshin Harper
To: user@cassandra.apache.org
Date: Mon, 30 Dec 2013 05:41:42 -0500

No. This is not going to work. The vnodes feature requires the Murmur3
partitioner, which was introduced with Cassandra 1.2.

Since you are currently using 1.1, you must be using the random
partitioner, which is not compatible with vnodes.

Because the partitioner determines the physical layout of all of your
data, both on disk and across the cluster, it is not possible to change
the partitioner without taking some downtime to rewrite all of your data.

You should probably plan on an upgrade to 1.2, but without also switching
to vnodes at this point.

-Tupshin
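For concreteness, a minimal sketch of the relevant cassandra.yaml settings
for that path, assuming the stock 1.2 key names; the token shown is node
1.1.1.2's, taken from the ring output quoted below:

    # cassandra.yaml on each node after the 1.2 upgrade: keep the
    # partitioner the cluster was created with -- do NOT adopt the new
    # 1.2 default (Murmur3Partitioner) on an existing cluster.
    partitioner: org.apache.cassandra.dht.RandomPartitioner

    # Keep the node's existing single token; with num_tokens left
    # commented out, vnodes stay disabled.
    initial_token: 28356863910078205288614550619314017622
    # num_tokens: 256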
On Dec 30, 2013 9:46 AM, "Katriel Traum" <katriel@google.com> wrote:

> Hello list,
>
> I have a two-DC setup with a DC1:3, DC2:3 replication factor. DC1 has 6
> nodes, DC2 has 3. This whole setup runs on AWS, on Cassandra 1.1.
> Here's my nodetool ring:
>
> Address  DC       Rack  Status  State   Load       Owns     Token
> 1.1.1.1  eu-west  1a    Up      Normal   55.07 GB   50.00%  0
> 2.2.2.1  us-east  1b    Up      Normal  107.82 GB  100.00%  1
> 1.1.1.2  eu-west  1b    Up      Normal   53.98 GB   50.00%  28356863910078205288614550619314017622
> 1.1.1.3  eu-west  1c    Up      Normal   54.85 GB   50.00%  56713727820156410577229101238628035242
> 2.2.2.2  us-east  1d    Up      Normal  107.25 GB  100.00%  56713727820156410577229101238628035243
> 1.1.1.4  eu-west  1a    Up      Normal   54.99 GB   50.00%  85070591730234615865843651857942052863
> 1.1.1.5  eu-west  1b    Up      Normal   55.1 GB    50.00%  113427455640312821154458202477256070484
> 2.2.2.3  us-east  1e    Up      Normal  106.78 GB  100.00%  113427455640312821154458202477256070485
> 1.1.1.6  eu-west  1c    Up      Normal   55.01 GB   50.00%  141784319550391026443072753096570088105
>
> I am going to upgrade my machine type, upgrade to 1.2, and shrink DC1
> from 6 nodes to 3. I will have to do it on the live system.
> I'd appreciate any comments on my plan:
>
> 1. Decommission a 1.1 node.
> 2. Bootstrap a new one in its place, running Cassandra 1.2 with vnodes
>    enabled (I am trying to avoid a rebalance later on).
> 3. When done, decommission nodes 4-6 in DC1.
>
> Issues I've spotted:
>
> 1. I'm guessing I will have an unbalanced cluster for the period where
>    I have 1.2+vnodes and 1.1 mixed.
> 2. Rollback is cumbersome; snapshots won't help here.
>
> Any feedback appreciated.
>
> Katriel
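For reference, a rough per-node sketch of the in-place rolling upgrade
suggested in the reply (1.2, no vnodes), as an alternative to the
decommission/bootstrap plan above; the service and package steps are
placeholders that vary by install, while the nodetool commands are
standard:

    # Flush memtables and stop accepting traffic on this node.
    nodetool drain
    sudo service cassandra stop

    # Install the 1.2 binaries, then in cassandra.yaml keep:
    #   partitioner: org.apache.cassandra.dht.RandomPartitioner
    #   initial_token: <this node's token from nodetool ring>
    # and leave num_tokens commented out so vnodes stay off.
    sudo service cassandra start

    # Rewrite this node's sstables into the 1.2 on-disk format.
    nodetool upgradesstables

Repeat node by node, letting each one come fully back up before moving to
the next. Note that once upgradesstables has run, the files can no longer
be read by 1.1, so any rollback has to happen before that step.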