incubator-cassandra-user mailing list archives

From amulya rattan <talk2amu...@gmail.com>
Subject Using Cassandra for read operations
Date Thu, 21 Feb 2013 19:03:49 GMT
Dear All,

We are currently evaluating Cassandra for an application with strict
SLAs (service-level agreements). We need just one column family with a long
key and rows of approximately 70-80 bytes. We are not concerned about write
performance but primarily about reads. For our SLAs, a read of
at most 15-20 rows at once (using a multiget slice) should take no more than 4 ms.
So far, on a single-node setup using Cassandra's stress tool, the numbers
are promising. But I am guessing that's because there is no network latency
involved, and since we set the memtable size to around 2 GB (with a 4 GB heap), we
never had to touch disk I/O.

Assuming our nodes have >32 GB RAM, a couple of questions regarding reads:

* To avoid disk I/O, the best option we can think of is to keep the data in
memory. Is it a good idea to set the memtable size to around 1/2 or 3/4 of
the heap? Obviously flushing will then take a long time, but would that
seriously hurt the node's performance?
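For reference, the memtable ceiling is a single knob in cassandra.yaml; a
sketch assuming Cassandra 1.x, with an illustrative value rather than a
recommendation:

```yaml
# cassandra.yaml (Cassandra 1.x) -- illustrative value, not a recommendation
# Total memory allowed for all memtables combined; when exceeded, the
# largest memtable is flushed. If left unset it defaults to 1/3 of the heap.
memtable_total_space_in_mb: 3072   # ~3/4 of a 4 GB heap
```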

* The Cassandra stress tool only reports average read latency. Is there a
way to find the maximum read latency for a batch of read operations?
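(Aside: `nodetool cfhistograms` gives per-column-family latency histograms
on the server side. For a client-side check, one workaround is to record
each operation's latency yourself and summarize max and percentiles; a
minimal sketch, where the multiget timing wrapper is left to the caller:)

```python
# Minimal sketch: summarize per-operation read latencies client-side,
# since an average alone hides outliers against a 4 ms SLA.
# In practice, time each multiget slice call and append the result (in ms)
# to a list, then pass that list to summarize().

def percentile(latencies, pct):
    """Nearest-rank percentile of a list of latencies."""
    ordered = sorted(latencies)
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

def summarize(latencies_ms):
    """Average, maximum, and 99th-percentile latency in milliseconds."""
    return {
        "avg_ms": sum(latencies_ms) / len(latencies_ms),
        "max_ms": max(latencies_ms),
        "p99_ms": percentile(latencies_ms, 99),
    }
```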

* How big can a row cache be? Given that Cassandra provides off-heap row
caching, on a machine with >32 GB RAM, would it be wise to have a >10 GB row
cache with an 8 GB Java heap? And how big should the corresponding key cache
be then?
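For concreteness, the cache settings in question live in cassandra.yaml; a
sketch assuming Cassandra 1.x, with illustrative sizes only:

```yaml
# cassandra.yaml (Cassandra 1.x) -- illustrative sizes, not a recommendation
row_cache_size_in_mb: 10240                    # ~10 GB row cache
row_cache_provider: SerializingCacheProvider   # store rows off-heap
# Key cache entries are small; the default is min(5% of heap, 100 MB).
key_cache_size_in_mb: 512
```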

Any response is appreciated.

~Amulya
