On Sun, May 1, 2011 at 2:58 PM, shimi <shimi.k@gmail.com> wrote:
On Sun, May 1, 2011 at 9:48 PM, Jake Luciani <jakers@gmail.com> wrote:
If you have N column families, you need N times the memtable size of RAM to support this.  If that's not an option, you can merge them into one as you suggest, but then you will have much larger SSTables, slower compactions, etc.
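As a rough back-of-the-envelope illustration of that N-times-memtable cost (the 64 MB per-memtable figure below is a hypothetical example value, not something stated in this thread — check your own memtable settings):

```python
# Rough estimate of memtable RAM for N column families.
# Each CF keeps its own memtable, so RAM grows linearly with CF count.
def memtable_ram_mb(num_cfs, memtable_mb_each=64):
    # memtable_mb_each is an illustrative default; tune to your config.
    return num_cfs * memtable_mb_each

# With ~40 CFs (as in this thread) at a hypothetical 64 MB each:
print(memtable_ram_mb(40))  # 2560 MB, i.e. 2.5 GB just for memtables
```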
 
I don't necessarily agree with Tyler that the OS cache will be less effective, but I do agree that if the SSTable sizes are too large for you, then more hardware is the solution.

If you merge CFs that are hardly accessed with one that is accessed frequently, then when you read the SSTable you load rarely accessed data into the OS cache.

Only the rows, or portions of rows, that you read will be loaded into the OS cache.  Just because different rows are in the same file doesn't mean the entire file is loaded into the OS cache.  The bloom filter and index file will be loaded, but those are not large files.

 

Another thing you should be aware of: if you need to run any of the per-CF nodetool tasks, and you only need it for a specific CF, running it on just that CF is better and faster.
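For example (exact syntax may vary by Cassandra version; the host, keyspace, and CF names below are placeholders):

```shell
# Compact only the hot CF instead of every CF in the keyspace.
# "MyKeyspace" and "HotCF" are placeholder names.
nodetool -h localhost compact MyKeyspace HotCF

# Same idea for repair: scoping it to one CF keeps the operation small.
nodetool -h localhost repair MyKeyspace HotCF
```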

Shimi
 


On Sun, May 1, 2011 at 1:24 PM, Tyler Hobbs <tyler@datastax.com> wrote:
When you have a high number of CFs, it's a good idea to consider merging CFs with highly correlated access patterns and similar structure into one. It is *not* a good idea to merge all of your CFs into one (unless they all happen to meet these criteria). Here's why:

Besides big compactions and long repairs that you can't break down into smaller pieces, the main problem here is that your caching will become much less efficient. The OS buffer cache will be less effective because rows from all of the CFs will be interspersed in the SSTables. You will no longer be able to tune the key or row cache to only cache frequently accessed data. Both of these will tend to cause a serious increase in latency for your hot data.

Shouldn't these kinds of problems be solved by Cassandra?

They are mainly solved by Cassandra's general solution to any performance problem: the addition of more nodes. There are open tickets to improve compaction strategies, put bounds on SSTable sizes, etc. (for example, https://issues.apache.org/jira/browse/CASSANDRA-1608), but the addition of more nodes is a reliable solution to problems of this nature.

On Sun, May 1, 2011 at 7:28 AM, David Boxenhorn <david@taotown.com> wrote:
Shouldn't these kinds of problems be solved by Cassandra? Isn't there a maximum SSTable size?

On Sun, May 1, 2011 at 3:24 PM, shimi <shimi.k@gmail.com> wrote:
Big SSTables, long compactions, and during a major compaction you will need free disk space equal to the size of all the SSTables (which you should have anyway).

Shimi


On Sun, May 1, 2011 at 2:03 PM, David Boxenhorn <david@taotown.com> wrote:
I'm having problems administering my cluster because I have too many CFs (~40).

I'm thinking of combining them all into one big CF. I would prefix the current CF name to the keys, repeat the CF name in a column, and index the column (so I can loop over all rows, which I have to do sometimes, for some CFs).
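A minimal sketch of that key-prefixing scheme in plain Python (the names here, such as CF_COLUMN and the ':' separator, are my own illustration, not an established convention — the indexed-query side would go through your client library, e.g. pycassa):

```python
# Sketch of folding many CFs into one by namespacing row keys.
# All names are illustrative; adapt to your client library.
CF_COLUMN = "_cf"  # secondary-indexed column holding the original CF name

def merged_key(cf_name, row_key):
    """Prefix the original CF name onto the row key, e.g. 'users:42'."""
    return "%s:%s" % (cf_name, row_key)

def merged_row(cf_name, columns):
    """Repeat the CF name in an indexed column so the rows of one logical
    CF can still be enumerated with an indexed-slice query."""
    row = dict(columns)
    row[CF_COLUMN] = cf_name
    return row

key = merged_key("users", "42")              # 'users:42'
row = merged_row("users", {"name": "alice"}) # adds {'_cf': 'users'}
```

One caveat with the separator: if your existing row keys can contain ':', pick a delimiter (or length-prefixed encoding) that can't collide.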

Can anyone think of any disadvantages to this approach?






--
Tyler Hobbs
Software Engineer, DataStax
Maintainer of the pycassa Cassandra Python client library




--
http://twitter.com/tjake



