cassandra-user mailing list archives

From Paul Prescod <pres...@gmail.com>
Subject Re: Worst case #iops to read a row
Date Tue, 13 Apr 2010 18:31:22 GMT
I am just checking the math, not the model.

On Tue, Apr 13, 2010 at 10:48 AM, Time Less <timelessness@gmail.com> wrote:

>
> numRowsOnNode = 10B / 20 = 500M.

50 million

> replicationFactor = 3.
> rowsPerSStable = 128MB / 1K = 131k.
>
> Therefore worst-case iops per read on this cluster are:
> (500M * 3 / 131k) * 3 = 150M / 131k = 11,450.

This line isn't internally consistent. Where did 150M come from? 500M
* 3 is 1.5 billion, not 150M, and with both factors of 3, 500M * 9 =
4.5 billion.

My calculation for the whole thing is 3433.
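The 3433 figure can be reproduced with a quick sketch of the quoted formula. This assumes the units implied by the thread (128 MB SSTables, 1 KB rows) and uses the corrected per-node row count of 50 million rather than the 500M in the quoted post:

```python
# Inputs implied by the thread (assumptions, not verified against the model):
rows_per_node = 50_000_000                      # corrected figure: 50M, not 500M
replication_factor = 3
rows_per_sstable = 128 * 1024 * 1024 // 1024    # 128 MB / 1 KB = 131,072

# Quoted worst-case formula:
# (rowsOnNode * replicationFactor / rowsPerSStable) * 3
worst_case_iops = rows_per_node * replication_factor / rows_per_sstable * 3
print(round(worst_case_iops))  # 3433
```

With the original 500M rows per node, the same formula gives ~34,332, so the quoted 11,450 does not follow from either input.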

I am not claiming to be a Cassandra expert, so I cannot vouch for the
model itself.

 Paul Prescod
