I'm using a separate keyspace for each application.
What I want is to split the cluster up by load pattern, so that two apps with the same, very high load pattern don't clash. For other load patterns I'd like to use a different split.
Is there any best practice for this, or should I just scale out, so that the complete load can be distributed across all nodes?
A node can only exist in one DC and one rack.
Use different keyspaces as suggested.
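For reference, the PropertyFileSnitch topology file maps each node's address to exactly one datacenter:rack pair, which is why a node can't belong to two DCs. A minimal sketch (the IPs, DC names, and rack names here are illustrative, not from this thread):

```
# conf/cassandra-topology.properties (hypothetical addresses)
# Each node maps to exactly one datacenter:rack pair;
# listing the same IP twice would simply overwrite the earlier entry.
10.0.0.1=DC-A:RAC1
10.0.0.2=DC-B:RAC1
10.0.0.3=DC-A:RAC2
10.0.0.4=DC-B:RAC2

# fallback for nodes not listed above
default=DC-A:RAC1
```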
On 12/04/2013, at 1:47 AM, Jabbar Azam <firstname.lastname@example.org> wrote:
I'm not an expert but I don't think you can do what you want. The way to separate data for applications on the same cluster is to use different tables for different applications or use multiple keyspaces, a keyspace per application. The replication factor you specify for each keyspace specifies how many copies of the data are stored in each datacenter.
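As a sketch of the per-keyspace replication described above, each application's keyspace can declare how many replicas it wants in each datacenter via NetworkTopologyStrategy (the keyspace and DC names below are made up for illustration):

```
-- One keyspace per application; replication is configured per datacenter.
-- 'DC-A' / 'DC-B' must match the datacenter names the snitch reports.
CREATE KEYSPACE app_a
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC-A': 3};

CREATE KEYSPACE app_b
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC-B': 3};
```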
You can't specify that data for a particular application is stored on a specific node, unless that node is in its own cluster.
I think of a Cassandra cluster as a shared resource where all the applications have access to all the nodes in the cluster.
On 11 April 2013 14:13, Matthias Zeilinger <Matthias.Zeilinger@bwinparty.com> wrote:
I would like to create a big cluster for many applications.
Within this cluster I would like to separate the data for each application, which can easily be done via different virtual datacenters and the correct replication strategy.
What I would like to know is whether I can specify multiple values for one node in the PropertyFileSnitch configuration, so that I can use one node for more than one application.
I want to have a configuration with 3 nodes for App A, 3 for App B, and 4 for App C:
Node 1: DC-A & DC-C
Node 2: DC-B & DC-C
Node 3: DC-A & DC-C
Node 4: DC-B & DC-C
Node 5: DC-A
Node 6: DC-B
Node 5 DC-A
Node 6 DC-B
Is this possible or does anyone have another solution for this?
Thx & br, Matthias