From: Andras Szerdahelyi <andras.szerdahelyi@ignitionone.com>
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: multiple Datacenter values in PropertyFileSnitch
Date: Fri, 12 Apr 2013 07:33:23 +0000

I would replicate your different keyspaces to different DCs and scale those appropriately. So, for example, a HighLoad keyspace replicates to really-huge-dc, which would have 10 nodes, and a LowerLoad keyspace replicates to smaller-dc with 5 nodes.

The idea is, you do not mix your different keyspaces in the same datacenter (this is possible with NetworkTopologyStrategy). Or, for redundancy/HA purposes, you place a single replica in the other keyspace's DC, but you direct your applications to the "primary" DC of the keyspace, with LOCAL_QUORUM or ONE reads.
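A minimal CQL sketch of that layout, using the placeholder keyspace and DC names from the example above and made-up replication counts:

    -- hypothetical keyspaces and DC names from the example above;
    -- each keyspace keeps its full replica set in its "primary" DC
    -- and a single optional HA replica in the other DC
    CREATE KEYSPACE "HighLoad" WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'really-huge-dc': 3,
        'smaller-dc': 1
    };

    CREATE KEYSPACE "LowerLoad" WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'smaller-dc': 3,
        'really-huge-dc': 1
    };

Each application's clients would then connect only to their keyspace's primary DC and read at LOCAL_QUORUM (or ONE), so the lone HA replica in the other DC stays out of the normal read path.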

Regards,
Andras

From: Matthias Zeilinger <Matthias.Zeilinger@bwinparty.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Friday 12 April 2013 07:57
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: RE: multiple Datacenter values in PropertyFileSnitch

I'm using for each application its own keyspace.

What I want is to split up for different load patterns, so that 2 apps with the same and very high load pattern are not clashing. For other load patterns I want to use another splitting.

Is there any best practice, or should I scale out so that the complete load can be distributed across all nodes?

Br,
Matthias Zeilinger
Production Operation – Shared Services
P: +43 (0) 50 858-31185
M: +43 (0) 664 85-34459
E: matthias.zeilinger@bwinparty.com
bwin.party services (Austria) GmbH
Marxergasse 1B
A-1030 Vienna
www.bwinparty.com

From: aaron morton [mailto:aaron@thelastpickle.com]
Sent: Thursday, 11 April 2013 20:48
To: user@cassandra.apache.org
Subject: Re: multiple Datacenter values in PropertyFileSnitch

A node can only exist in one DC and one rack.

Use different keyspaces as suggested.

Cheers

-----------------
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
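For reference, PropertyFileSnitch takes its topology from cassandra-topology.properties, where every node address maps to exactly one datacenter:rack pair, which is why a node cannot belong to two DCs. A sketch with made-up addresses and the DC names from the original question:

    # cassandra-topology.properties (hypothetical node addresses)
    # format: <node address>=<datacenter>:<rack>
    # each node appears once, with exactly one DC and one rack
    192.168.1.1=DC-A:RAC1
    192.168.1.2=DC-B:RAC1
    192.168.1.3=DC-C:RAC1
    # nodes not listed above fall back to this entry
    default=DC-A:RAC1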

 

On 12/04/2013, at 1:47 AM, Jabbar Azam <ajazam@gmail.com> wrote:



Hello,

I'm not an expert, but I don't think you can do what you want. The way to separate data for applications on the same cluster is to use different tables for different applications, or multiple keyspaces, a keyspace per application. The replication factor you specify for each keyspace sets how many copies of the data are stored in each datacenter (see the sketch below).

You can't specify that data for a particular application is stored on a specific node, unless that node is in its own cluster.

I think of a Cassandra cluster as a shared resource where all the applications have access to all the nodes in the cluster.
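To make that concrete: the per-datacenter copy counts live on the keyspace definition and can be adjusted later. The keyspace and DC names here are hypothetical:

    -- hypothetical keyspace 'app_a' and datacenters 'DC1'/'DC2';
    -- keep 3 copies of every row in DC1 and 2 copies in DC2
    ALTER KEYSPACE app_a WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'DC1': 3,
        'DC2': 2
    };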

 


Thanks

Jabbar Azam

 

On 11 April 2013 14:13, Matthias Zeilinger <Matthias.Zeilinger@bwinparty.com> wrote:

Hi,

I would like to create a big cluster for many applications. Within this cluster I would like to separate the data for each application, which can be easily done via different virtual datacenters and the correct replication strategy.

What I would like to know is whether I can specify multiple values for one node in the PropertyFileSnitch configuration, so that I can use one node for more applications.

For example, 6 nodes:
3 for App A
3 for App B
4 for App C

I want to have such a configuration:
Node 1 – DC-A & DC-C
Node 2 – DC-B & DC-C
Node 3 – DC-A & DC-C
Node 4 – DC-B & DC-C
Node 5 – DC-A
Node 6 – DC-B

 

Is this possible or does anyone have another solution for this?

 

 

Thx & br matthias

 

 
