I'm doing get_slice on huge rows (3 million columns), and even though I fetch the columns iteratively in pages, I keep getting TimeoutExceptions. I've tried changing the number of columns fetched per call, but it didn't help.
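For reference, the iterative fetch follows roughly this pattern (sketched in Python against an in-memory stand-in for get_slice, since the real call goes through the Thrift client; the stand-in mimics get_slice's inclusive start and name-ordered results, and all names here are illustrative):

```python
def get_slice(row, start, count):
    """Stand-in for Cassandra's get_slice: returns up to `count`
    (name, value) pairs with name >= start, ordered by column name."""
    cols = [(n, v) for n, v in row if n >= start]
    return cols[:count]

def page_row(row, page_size=1000):
    """Fetch every column of a huge row in fixed-size slices.

    Each pass resumes from the last column name seen. Because the
    slice start is inclusive, pages after the first request one extra
    column and drop the duplicate of the previous page's last column.
    """
    results = []
    start = ''  # '' means "from the beginning of the row"
    while True:
        count = page_size + (1 if results else 0)
        cols = get_slice(row, start, count)
        if results:
            cols = cols[1:]  # drop the inclusive duplicate
        if not cols:
            break
        results.extend(cols)
        start = cols[-1][0]  # resume from the last name returned
        if len(cols) < page_size:
            break  # short page: end of row
    return results
```

Even with this pattern and small page sizes, the get_slice calls still time out on the 3-million-column rows.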
I have a 5-machine cluster, each machine with 4 GB of RAM, 3 GB of which are dedicated to Cassandra's heap, but they still consume all of the memory and show huge I/O wait due to the volume of reads.
I am running tests with 100 clients, each performing multiple operations (mostly get_slice, get, and multi_get), but the timeouts only occur on get_slice.
Does this have anything to do with Cassandra's ability (or lack thereof) to keep such wide rows in memory? Or am I doing something wrong? Any tips?