It's not about columns, it's about rows; see the example statement.
CQL will read everything into a List in order to compute the count afterwards. From 1.0 onwards, count paginates while reading the columns. What version are you on?

Cheers

On 26/09/2012, at 8:26 PM, Віталій Тимчишин <firstname.lastname@example.org> wrote:

Actually, an easy way to bring Cassandra down is:

select count(*) from A limit 10000000

CQL will read everything into a List in order to compute the count afterwards.

2012/9/26 aaron morton <email@example.com>
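The difference between the two approaches above (materializing every row in one List versus paginating while counting) can be sketched in plain Python. The `fetch_page` helper below is a hypothetical stand-in for however the server or driver reads one slice of rows; it is not a real Cassandra API.

```python
# Sketch: counting rows one fixed-size page at a time instead of
# materializing everything in a single List, so memory stays bounded
# by the page size rather than the result size.

def fetch_page(rows, start, page_size):
    """Return one page of rows starting at `start` (hypothetical helper)."""
    return rows[start:start + page_size]

def paginated_count(rows, page_size=1000):
    """Count rows page by page; peak memory is O(page_size)."""
    total = 0
    start = 0
    while True:
        page = fetch_page(rows, start, page_size)
        if not page:          # empty page means we have read everything
            break
        total += len(page)
        start += page_size
    return total
```

With the pre-1.0 behavior being complained about, the equivalent would be `len(list(all_rows))`, which is exactly what blows the heap on a ten-million-row count.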
Can you provide some information on the queries and the size of the data they traversed? The default maximum size for a single Thrift message is 16MB; was it larger than that? https://github.com/apache/cassandra/blob/trunk/conf/cassandra.yaml#L375

Cheers

On 25/09/2012, at 8:33 AM, Bryce Godfrey <Bryce.Godfrey@azaleos.com> wrote:

Is there anything I can do on the configuration side to prevent nodes from going OOM due to queries that read large amounts of data and exceed the available heap?

For the past few days we have had some nodes consistently freezing/crashing with OOM. We got a heap dump into MAT and figured out that the nodes were dying due to queries over a few extremely large data sets. We tracked it back to an app that just didn't prevent users from issuing these large queries, but it seems like Cassandra could be smart enough to guard against this type of thing?

Basically some kind of setting like "if the data to satisfy the query > available heap, then throw an error to the caller and abort the query". I would much rather return errors to clients than crash a node, as the error is easier to track down and resolve that way.

Thanks.

--
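For reference, the limit Aaron links to is set in conf/cassandra.yaml. In the 1.x line it looks roughly like the fragment below (exact option names and defaults vary by version, so treat this as an illustration rather than a copy-paste recommendation):

```yaml
# cassandra.yaml (excerpt, Cassandra 1.x era)
# Frame size for the framed Thrift transport, in MB.
thrift_framed_transport_size_in_mb: 15
# Maximum size of a single Thrift message, in MB. Requests or responses
# larger than this are rejected instead of being buffered on the heap.
thrift_max_message_length_in_mb: 16
```

Raising these values lets larger responses through, but it also raises how much a single runaway query can allocate before anything pushes back, which is the failure mode described in the original message.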