incubator-cassandra-user mailing list archives

From Peter Schuller <peter.schul...@infidyne.com>
Subject Re: flush_largest_memtables_at messages in 7.4
Date Thu, 14 Apr 2011 07:10:09 GMT
> Actually when I run 2 stress clients in parallel I see Read Latency stay the
> same. I wonder if cassandra is reporting accurate nos.

Or you're just bottlenecking on something else. Are you running the
extra stress clients on different machines, for example, so that the
client machine itself isn't the thing saturating?
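To make that concrete, here is a hedged sketch of driving the cluster from two separate load-generator hosts instead of one. The host names (loadgen1, loadgen2, cassandra-node) and the exact stress-tool flags are assumptions for illustration; adjust them to whatever stress client and options you are actually using:

```shell
# Launch the stress client on two different machines via ssh, so a single
# client box's CPU or NIC can't be the bottleneck. Host names and flags
# are hypothetical placeholders.
ssh loadgen1 'stress -d cassandra-node -o read -t 50' &
ssh loadgen2 'stress -d cassandra-node -o read -t 50' &
wait
```

If aggregate throughput stays flat when you add the second client machine, the limit is on the server side (disk, CPU, or GC), not in the client.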

> I understand your analogy but for some reason I don't see that happening
> with the results I am seeing with multiple stress clients running. So I am
> just confused where the real bottleneck is.

If the queue size to your device is consistently high (you were
mentioning numbers in the ~100 range), you're saturating on disk,
period. Unless your "disk" is actually a 500-drive RAID volume, so
that 100 outstanding requests represents only 1/5 of its capacity...
(If you have a RAID volume with a few disks, or an SSD, you want to
switch to the noop or deadline I/O scheduler, by the way.)
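For reference, here is a sketch of how you might confirm the queue depth and switch the scheduler on Linux. The device name (sdb) and the 5-second interval are assumptions; substitute your actual data device, and note that writing to the scheduler file requires root:

```shell
# Watch per-device I/O stats; a persistently large avgqu-sz (average
# request queue length) and %util near 100 indicate disk saturation.
iostat -x 5

# Show the current I/O scheduler for the (assumed) device sdb; the
# active one is shown in brackets.
cat /sys/block/sdb/queue/scheduler

# Switch to the deadline scheduler (takes effect immediately, but does
# not persist across reboots; use noop instead for many SSDs/RAID HBAs).
echo deadline > /sys/block/sdb/queue/scheduler
```

The rationale: the default cfq scheduler spends effort reordering requests to minimize seeks, which is wasted (or counterproductive) when a RAID controller or SSD firmware already does its own scheduling underneath.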

-- 
/ Peter Schuller
