From: Haithem Jarraya <a-hjarraya@expedia.com>
To: user@cassandra.apache.org
Date: Mon, 5 Aug 2013 13:03:48 +0100
Subject: Re: Reducing the number of vnodes

Chris,
Which C* version are you running? 
You might want to upgrade to the latest version before reducing the vnode count; a lot of fixes and improvements went in lately, and it might help your repairs run faster.
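
If you're not sure what every node is running, nodetool will tell you; the version in the output below is only an example, not a guess at yours:

    $ nodetool version
    ReleaseVersion: 1.2.8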

H

On 5 Aug 2013, at 12:30, Christopher Wirt <chris.wirt@struq.com> wrote:

Hi,
 
I'm thinking about reducing the number of vnodes per server.
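
To be precise, that's the num_tokens setting in cassandra.yaml, e.g. (the path below assumes a package install; ours may differ):

    $ grep num_tokens /etc/cassandra/cassandra.yaml
    num_tokens: 256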
 
We have a 3 DC setup – one with 9 nodes, two with 3 nodes each.
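
Keyspaces span all three DCs, something like this (NetworkTopologyStrategy is my shorthand here, and the keyspace name, DC names and counts are illustrative, not our exact settings):

    cqlsh> ALTER KEYSPACE myks WITH replication =
       ... {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3, 'DC3': 3};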
 
Each node has 256 vnodes. We've found that repair operations are beginning to take too long.
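
For reference, a per-node primary-range repair looks like this (the -pr flag restricts a run to that node's primary ranges; keyspace argument omitted):

    $ nodetool repair -pr

and it's these per-node runs that have been getting slower for us.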
 
Is reducing the number of vnodes to 64/32 likely to help our situation?

What options do I have for achieving this in a live cluster?
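
The route I keep coming back to is standing up new nodes with a lower token count and retiring the old ones, roughly like this (just a sketch; the token count, and using a fresh DC as the landing zone, are assumptions on my part):

    # cassandra.yaml on each new node, set before its first start
    # (num_tokens can't be changed once a node has joined with data)
    num_tokens: 32

    # on each new node, once the new DC is up: stream data in from an
    # existing DC ("DC1" is a placeholder for the source DC name)
    $ nodetool rebuild DC1

    # then retire the old nodes one at a time
    $ nodetool decommission

Is that the sane way to do it, or is there something simpler?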
 
 
Thanks,
 
Chris
