cassandra-user mailing list archives

From: Jonathan Ellis <>
Subject: Re: why read operation use so much of memory?
Date: Mon, 19 Apr 2010 20:41:10 GMT
(Moving to users@ list.)

Like any Java server, Cassandra will use as much memory in its heap as
you allow it to.  You can request a GC from jconsole to see what its
approximate "real" working set is.  The wiki explains why reads
are slower than writes.  You can tune this by using the key cache, row
cache, or by using range queries instead of requesting rows one at a
time.
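
To make the heap point concrete: the ceiling is whatever -Xmx you pass
the JVM, which in 0.6 is set via JVM_OPTS in bin/cassandra.in.sh (the
exact layout and defaults vary by release; the value below is
illustrative, not the shipped default):

    # bin/cassandra.in.sh (0.6-era layout); -Xmx shown here is
    # illustrative, not the shipped default
    JVM_OPTS=" \
            -Xms1G \
            -Xmx1G"

The heap will grow toward that figure under load before the collector
works hard to stay below it.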

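To illustrate the range-query suggestion, here is a minimal sketch
against the 0.6 Thrift API in Python, assuming the generated Thrift
bindings from interface/ are on your path.  The keyspace and column
family names ('Keyspace1'/'Standard1') are the storage-conf.xml
samples; substitute your own schema and host.  Fetching rows in
batches with get_range_slices() replaces N round trips with one:

    from thrift.transport import TSocket, TTransport
    from thrift.protocol import TBinaryProtocol
    from cassandra import Cassandra
    from cassandra.ttypes import (ColumnParent, SlicePredicate, SliceRange,
                                  KeyRange, ConsistencyLevel)

    # plain Thrift connection to one node's RPC port
    socket = TSocket.TSocket('127.0.0.1', 9160)
    transport = TTransport.TBufferedTransport(socket)
    client = Cassandra.Client(TBinaryProtocol.TBinaryProtocol(transport))
    transport.open()

    parent = ColumnParent(column_family='Standard1')
    predicate = SlicePredicate(slice_range=SliceRange(
        start='', finish='', reversed=False, count=100))

    # one RPC returns up to 1000 rows, instead of 1000 get_slice() calls
    key_range = KeyRange(start_key='', end_key='', count=1000)
    rows = client.get_range_slices('Keyspace1', parent, predicate,
                                   key_range, ConsistencyLevel.ONE)
    for row in rows:
        print row.key, len(row.columns)

    transport.close()
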
contrib/py_stress is a better starting place for a benchmark than
rolling your own, btw.  We see about 8000 reads/s with that on a
4-core server.

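For reference, a read run with the bundled tool looks something like
the following; the flag names here are from memory and may differ
between versions, so check stress.py --help for the exact options:

    # hypothetical invocation: read back 6,000,000 keys with 50
    # threads, round-robining over two nodes
    python contrib/py_stress/stress.py -o read -n 6000000 -t 50 \
        -d host1,host2
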
On Sun, Apr 18, 2010 at 8:40 PM, Bingbing Liu <> wrote:
> Hi, all.
> I have a cluster of 5 nodes; each node has a 4-core CPU and 8 GB of memory.
> I am using Cassandra 0.6-beta3 for testing.
> First, I inserted 6,000,000 rows of 1 KB each, and the write speed was very exciting.
> But then, when I read them back one row at a time from two clients at the same time,
> one of the clients was very slow and took a long time.
> I found that on each node the Cassandra process occupies about 7 GB of memory
> (per the "top" command), which puzzled me.
> Why does the read operation use so much memory? Maybe I missed something?
> Thx.
> 2010-04-18
> Bingbing Liu
