From: Nate McCall
To: Cassandra Users <user@cassandra.apache.org>
Date: Wed, 13 Nov 2013 08:28:27 -0600
Subject: Re: Modeling multi-tenanted Cassandra schema

You basically want option (c). Option (d) might work, but you would be bending the paradigm a bit, IMO. Certainly do not use dedicated column families or keyspaces per tenant. That never works. The list history will show as much with a few Google searches, and we've seen it fail badly with several clients.

Overall, option (c) would be difficult to do in CQL without some very well-thought-out abstractions and/or a deep hack on the Java driver (not inelegant or impossible, just lots of moving parts to get your head around if you are new to such). That said, depending on the size of your project and the skill of your team, this direction might be worth considering.
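To make (c) concrete, here is a rough sketch of that layout in CQL - table and column names are invented for illustration, so adjust to your actual model:

CREATE TABLE sensor_readings (
    tenant_id  text,
    sensor_id  text,       -- the natural application key
    event_time timestamp,  -- the timestamp clustering key
    value      double,
    PRIMARY KEY ((tenant_id, sensor_id), event_time)
);

-- every query is then scoped to a single tenant's partitions
SELECT value FROM sensor_readings
WHERE tenant_id = 'acme' AND sensor_id = 'sensor-42'
  AND event_time >= '2013-11-01';

The abstraction work I mentioned above is mostly about guaranteeing that every statement the application issues carries that tenant_id, which is exactly where a wrapper around the driver earns its keep.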
Usergrid (just accepted for incubation at Apache) functions this way via the Thrift API: https://github.com/apigee/usergrid-stack

The commercial version of Usergrid has "tens of thousands" of active tenants on a single cluster (same code base at the service layer as the open source version). It uses Hector's built-in virtual keyspaces: https://github.com/hector-client/hector/wiki/Virtual-Keyspaces (NOTE: though Hector is sunsetting/in patch maintenance, the approach is certainly legitimate - but I'd recommend you *not* start a new project on Hector).

In short, Usergrid is the only project I know of with a well-proven tenant model that functions at scale, though I'm sure there are others around, just not open sourced or actually running large deployments.

Astyanax can do this as well, albeit with a little more work required: https://github.com/Netflix/astyanax/wiki/Composite-columns#how-to-use-the-prefixedserializer-but-you-really-should-use-composite-columns
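Conceptually, Hector's virtual keyspaces and Astyanax's PrefixedSerializer do the same thing: they transparently prepend a tenant prefix to every row key before it hits Cassandra. In CQL terms the stored layout amounts to something like the following (illustrative names; this is the idea, not literally what either library emits):

CREATE TABLE users (
    key  text PRIMARY KEY,  -- the library stores 'acme:jsmith' when the app writes 'jsmith'
    name text
);

INSERT INTO users (key, name) VALUES ('acme:jsmith', 'John Smith');

-- the application only ever sees the un-prefixed key; the prefix is added
-- and stripped by the client library on the way in and out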
Happy to clarify any of the above.
On Tue, Nov 12, 2013 at 3:19 AM, Ben Hood <0x6e6562@gmail.com> wrote:

> Hi,
>
> I've just received a requirement to make a Cassandra app
> multi-tenanted, where we'll have up to 100 tenants.
>
> Most of the tables are timestamped wide row tables with a natural
> application key as the partitioning key and a timestamp as a
> clustering key.
>
> So I was considering the options:
>
> (a) Add a tenant column to each table and stick a secondary index on
> that column;
> (b) Add a tenant column to each table and maintain index tables that
> use the tenant id as a partitioning key;
> (c) Decompose the partitioning key of each table and add the tenant
> as the leading component of the key;
> (d) Add the tenant as a separate clustering key;
> (e) Replicate the schema in separate tenant-specific keyspaces;
> (f) Something I may have missed.
>
> Option (a) seems the easiest, but I'm wary of just adding secondary
> indexes without thinking about it.
>
> Option (b) seems to have the least impact on the layout of the
> storage, but at the cost of maintaining each index table, both
> code-wise and in terms of performance.
>
> Option (c) seems quite straightforward, but I feel it might have a
> significant effect on the distribution of the rows if the cardinality
> of the tenants is low.
>
> Option (d) seems simple enough, but it would mean that you couldn't
> query for a range of tenants without supplying a range of natural
> application keys, through which you would need to iterate (under the
> assumption that you don't use an ordered partitioner).
>
> Option (e) appears relatively straightforward, but it does mean that
> the application CQL client needs to maintain separate cluster
> connections for each tenant. Also, I'm not sure to what extent
> keyspaces were designed to partition identically structured data.
>
> Does anybody have any experience with running a multi-tenanted
> Cassandra app, or does this just depend too much on the specifics of
> the application?
>
> Cheers,
>
> Ben



--
-----------------
Nate McCall
Austin, TX
@zznate

Co-Founder & Sr. Technical Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com