cassandra-user mailing list archives

From "R. Verlangen" <>
Subject Re: Wide row column slicing - row size shard limit
Date Thu, 16 Feb 2012 08:36:02 GMT
Things you should know:

- Thrift has a limit on the amount of data it will accept / send; you can
configure this in Cassandra. 64 MB should still work fine (1)
- Rows should not become huge: that would make "perfect" load balancing
in your cluster impossible
- A single row must fit on a single node's disk
- The limit is 2 billion columns per row

You should pick a bucket size for your time range (e.g. second, minute, ...)
that suits your needs.
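As a minimal sketch of that bucketing idea (the names and key layout here are illustrative, not from this thread): append a time bucket to the row key so each row only ever holds one bucket's worth of columns.

```python
from datetime import datetime, timezone

def row_key(sensor_id: str, ts: datetime, bucket: str = "hour") -> str:
    """Compose a wide-row key by appending a time bucket to the id.

    `sensor_id` and the bucket granularities are hypothetical; choose a
    bucket small enough that no single row can grow unbounded.
    """
    fmt = {"day": "%Y%m%d", "hour": "%Y%m%d%H", "minute": "%Y%m%d%H%M"}[bucket]
    return f"{sensor_id}:{ts.strftime(fmt)}"

ts = datetime(2012, 2, 16, 8, 36, tzinfo=timezone.utc)
print(row_key("sensor-42", ts))            # sensor-42:2012021608
print(row_key("sensor-42", ts, "minute"))  # sensor-42:201202160836
```

A query for a time span then reads the handful of row keys whose buckets overlap the span, instead of slicing one ever-growing row.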

As far as I'm aware, there is no 10 MB limit in Cassandra beyond which a
single row degrades performance. What you're seeing might be a memory / IO
problem instead.

2012/2/15 Data Craftsman <>

> Hello experts,
> Based on this blog of Basic Time Series with Cassandra data modeling,
> "This (wide row column slicing) works well enough for a while, but over
> time, this row will get very large. If you are storing sensor data that
> updates hundreds of times per second, that row will quickly become gigantic
> and unusable. The answer to that is to shard the data up in some way"
> There is supposedly a limit on how big a row can get before update and
> query performance slow down: 10 MB or less.
> Is this still true in the latest Cassandra version? Or in what release
> will Cassandra remove this limit?
> Manually sharding the wide row increases application complexity; it
> would be better if Cassandra could handle it transparently.
> Thanks,
> Charlie | DBA & Developer
> p.s. Quora link,
