From: aaron morton <aaron@thelastpickle.com>
To: user@cassandra.apache.org
Subject: Re: tokens and RF for multiple phases of deployment
Date: Fri, 1 Jun 2012 13:52:40 +1200

> The ring (2 in DC1, 1 in DC2) looks OK, but the load on the new node in DC2 is almost 0%.

Yeah, that's the way it will look.

> But all the other rows are not in the new node. Do I need to copy the data files from a node in DC1 to the new node?

How did you add the node? (see http://www.datastax.com/docs/1.0/operations/cluster_management#adding-nodes-to-a-cluster)

If in doubt, run nodetool repair on the new node.

Cheers
-----------------
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 1/06/2012, at 3:46 AM, Chong Zhang wrote:

> Thanks Aaron.
>
> I might use LOCAL_QUORUM to avoid waiting on the ack from DC2.
>
> Another question: after I set up a new node with token +1 in a new DC, I updated a CF with RF {DC1:2, DC2:1}.
> When I update a column on one node in DC1, it's also updated in the new node in DC2. But all the other rows are not in the new node. Do I need to copy the data files from a node in DC1 to the new node?
>
> The ring (2 in DC1, 1 in DC2) looks OK, but the load on the new node in DC2 is almost 0%.
>
> Address     DC   Rack  Status  State   Load       Owns    Token
>                                                           85070591730234615865843651857942052864
> 10.10.10.1  DC1  RAC1  Up      Normal  313.99 MB  50.00%  0
> 10.10.10.3  DC2  RAC1  Up      Normal  7.07 MB    0.00%   1
> 10.10.10.2  DC1  RAC1  Up      Normal  288.91 MB  50.00%  85070591730234615865843651857942052864
>
> Thanks,
> Chong
>
> On Thu, May 31, 2012 at 5:48 AM, aaron morton <aaron@thelastpickle.com> wrote:
>
>> Could you provide some guidance on how to assign the tokens in these growing deployment phases?
>
> Background: http://www.datastax.com/docs/1.0/install/cluster_init#calculating-tokens-for-a-multi-data-center-cluster
>
> Start with tokens for a 4-node cluster. Add the next 4 between each of the existing ranges. Add 8 in the new DC with the same tokens as the first DC, +1.
>
>> Also if we use the same RF (3) in both DCs, and use EACH_QUORUM for writes and LOCAL_QUORUM for reads, can the read also reach the 2nd cluster?
>
> No. It will fail if there are not enough nodes available in the first DC.
>
>> We'd like to keep both writes and reads on the same cluster.
>
> Writes go to all replicas. Using EACH_QUORUM means the client in the first DC will be waiting for the quorum from the second DC to ack the write.
>
> Cheers
> -----------------
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 31/05/2012, at 3:20 AM, Chong Zhang wrote:
>
>> Hi all,
>>
>> We are planning to deploy a small cluster with 4 nodes in one DC first, and will expand that to 8 nodes, then add another DC with 8 nodes for failover (not active-active), so all the traffic will go to the 1st cluster and switch to the 2nd cluster if the whole 1st cluster is down or on maintenance.
>>
>> Could you provide some guidance on how to assign the tokens in these growing deployment phases? I looked at some docs but am not very clear on how to assign tokens for the fail-over case.
>> Also if we use the same RF (3) in both DCs, and use EACH_QUORUM for writes and LOCAL_QUORUM for reads, can the read also reach the 2nd cluster? We'd like to keep both writes and reads on the same cluster.
>>
>> Thanks in advance,
>> Chong
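The token layout discussed in the thread (evenly spaced tokens on the 2**127 RandomPartitioner ring, Cassandra 1.0 era) can be sketched in Python. The node counts and the +1 offset for the second DC come from the thread above; `initial_tokens` is an illustrative helper, not a Cassandra API:

```python
# Token assignment for RandomPartitioner: evenly divide the
# 0 .. 2**127 ring among N nodes.
RING_SIZE = 2 ** 127

def initial_tokens(node_count, offset=0):
    """Evenly spaced tokens for node_count nodes, shifted by offset."""
    return [(i * RING_SIZE // node_count + offset) % RING_SIZE
            for i in range(node_count)]

# Phase 1: 4 nodes in DC1.
dc1_phase1 = initial_tokens(4)

# Phase 2: grow DC1 to 8 nodes. The 8-node layout keeps the original
# 4 tokens and adds one new token halfway between each adjacent pair,
# so existing nodes don't move.
dc1_phase2 = initial_tokens(8)
assert set(dc1_phase1) <= set(dc1_phase2)

# Phase 3: 8 nodes in the new DC, using the DC1 tokens offset by +1
# so no two nodes in the cluster share a token.
dc2 = initial_tokens(8, offset=1)

# The 2-node split from the ring output in the thread:
print(initial_tokens(2))  # [0, 85070591730234615865843651857942052864]
```

This reproduces the ring output above: DC1's two nodes at 0 and 2**127/2, and the DC2 node at token 1 (hence its near-zero Owns figure, which is cosmetic when NetworkTopologyStrategy places replicas per DC).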
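A small sketch of the EACH_QUORUM vs LOCAL_QUORUM behaviour described above (quorum per data centre is a majority of that DC's replicas). The replication settings mirror the RF 3 / RF 3 setup in the thread; the function names are illustrative, not driver APIs:

```python
def quorum(rf):
    # A quorum is a majority of the replicas: floor(rf / 2) + 1.
    return rf // 2 + 1

replication = {"DC1": 3, "DC2": 3}

# EACH_QUORUM: a write must be acked by a quorum in *every* DC, so a
# client in DC1 waits on cross-DC acks from DC2.
each_quorum_acks = {dc: quorum(rf) for dc, rf in replication.items()}
print(each_quorum_acks)  # {'DC1': 2, 'DC2': 2}

# LOCAL_QUORUM: only the coordinator's own DC must reach quorum, so a
# request never waits on the second cluster -- and, as noted in the
# thread, it fails if the local DC lacks enough live replicas.
def local_quorum_ok(local_dc, live_replicas):
    return live_replicas[local_dc] >= quorum(replication[local_dc])

print(local_quorum_ok("DC1", {"DC1": 2, "DC2": 0}))  # True
print(local_quorum_ok("DC1", {"DC1": 1, "DC2": 3}))  # False
```

This is why LOCAL_QUORUM on both reads and writes keeps all waiting inside DC1, at the cost of only asynchronous replication to the failover DC.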