incubator-cassandra-user mailing list archives

From Peter Schuller <peter.schul...@infidyne.com>
Subject Re: Read Latency Degradation
Date Sat, 18 Dec 2010 19:36:22 GMT
> You are absolutely back to my main concern. Initially we were consistently
> seeing < 10ms read latency and now we see 25ms (30GB sstable file), 50ms
> (100GB sstable file) and 65ms (330GB table file) read times for a single
> read with nothing else going on in the cluster. Concurrency is not our
> problem/concern (at this point), our problem is slow reads in total
> isolation. Frankly the concern is that a 2TB node with a 1TB sstable (worst
> case scenario) will result in > 100ms read latency in total isolation.

So if you have a single non-concurrent client, alone, submitting these
reads that take 65 ms - are you disk bound (according to the last
column, %util, of iostat -x 1), and how many disk reads per second (the
r/s column) are you seeing relative to client reads? Is the number of
disk reads per client read consistent with the actual number of
sstables at the time?
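The check above can be sketched as simple arithmetic; the numbers below are entirely hypothetical and just illustrate how to compare the iostat figures against the sstable count:

```python
# Back-of-the-envelope check: does disk-read amplification match the
# number of sstables each read has to touch? All numbers hypothetical.
iostat_r_s = 450.0      # disk reads/s from the r/s column of `iostat -x 1`
client_reads_s = 15.0   # client-visible reads/s reported by the application
sstable_count = 30      # sstables in the column family at the time

disk_reads_per_client_read = iostat_r_s / client_reads_s
print(f"disk reads per client read: {disk_reads_per_client_read:.1f}")

if disk_reads_per_client_read > sstable_count:
    print("more disk reads than sstables: look beyond sstable spread")
else:
    print("amplification roughly fits the sstable count")
```

If the ratio is well above the sstable count, something other than rows being spread across sstables is generating the extra I/O.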

The behavior you're describing really does seem indicative of a
problem, unless the bottleneck legitimately is disk reads from
multiple sstables, resulting from rows being spread over those
sstables.
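If that is the bottleneck, the latency should roughly follow a seeks-per-read model; a minimal sketch with assumed (hypothetical) seek cost and sstable spread:

```python
# Rough latency model, assuming each client read touches several sstables
# and each touch costs roughly one seek on a rotational disk.
# Both numbers below are assumptions for illustration, not measurements.
seek_time_ms = 8.0      # assumed per-seek cost (seek + rotational delay)
sstables_touched = 8    # assumed sstables a single row is spread over

latency_ms = sstables_touched * seek_time_ms
print(f"modeled read latency: {latency_ms:.0f} ms")
```

Under those assumptions a single isolated read already costs on the order of tens of milliseconds, which is the ballpark being reported; if the measured per-read disk reads are much lower than the sstable count, this model doesn't apply and something else is wrong.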

-- 
/ Peter Schuller
