cassandra-user mailing list archives

From Peter Schuller <peter.schul...@infidyne.com>
Subject Re: Cassandra benchmarking on Rackspace Cloud
Date Tue, 20 Jul 2010 13:41:59 GMT
> But what's then the point with adding nodes into the ring? Disk speed!

Well, it may also be cheaper to service an RPC request than service a
full read or write, even in terms of CPU.

But: even taking into account that requests are distributed randomly,
the cluster should still scale. You will approach paying the overhead
of a level of RPC indirection for 100% of requests, but it won't
become worse than that. That overhead is still distributed across the
entire cluster, and you should still see throughput increase as nodes
are added.

That said, given that the test in this case is probably the cheapest
possible test to make, even in terms of CPU, since it hits
non-existent values, maybe the RPC overhead is simply large enough
relative to this type of request that moving from 1 to 4 nodes doesn't
show an improvement. Suppose, for example, that the cost of forwarding
an RPC request is comparable to servicing a read request for a
non-existent key. Under those conditions, going from 1 to 2 nodes
would not be expected to affect throughput at all. Going from 2 to 3
should start to see an improvement, etc. If the RPC overhead is higher
than servicing the read, you'd see performance drop from 1 to 2 nodes
(but it should still eventually start scaling with node count).
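For what it's worth, the break-even argument can be sketched as a toy
model (the cost numbers and the uniform-routing assumption are mine,
purely illustrative, not measurements from David's cluster):

```python
def cluster_throughput(n_nodes, read_cost=1.0, rpc_cost=2.0):
    """Toy model of a cluster where requests land on a uniformly
    random node, so a fraction (n-1)/n of them must be forwarded
    one RPC hop to the node owning the key.

    Per-request work = read_cost + forwarded_fraction * rpc_cost,
    and aggregate capacity scales linearly with node count.
    All costs are assumed values for illustration only.
    """
    forwarded_fraction = (n_nodes - 1) / n_nodes
    per_request_cost = read_cost + forwarded_fraction * rpc_cost
    return n_nodes / per_request_cost

# Throughput for 1..6 nodes under the assumed costs; the forwarded
# fraction approaches 100% but never exceeds it, so the per-request
# cost is bounded by read_cost + rpc_cost and scaling resumes.
for n in range(1, 7):
    print(n, cluster_throughput(n))
```

In this particular parameterization (RPC forwarding twice as expensive
as a miss read) the model reproduces a flat step from 1 to 2 nodes
followed by gains from 3 nodes onward; with a cheaper RPC it improves
immediately, and with a pricier one it dips first. What it cannot
reproduce is throughput staying flat indefinitely as nodes are added,
which is why the reported numbers look inconsistent with the
hypothesis.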

What seems inconsistent with this hypothesis is that in the numbers
reported by David, there is an initial drop in performance going from
1 to 2 nodes, and then it seems to flatten completely rather than
changing as more nodes are added. Other than at the point of
equilibrium between additional RPC overhead and additional capacity,
I'd expect to either see an increase or a decrease in performance with
each added node.

Additionally, in the absolute beginning of this thread, before the
move to testing non-existent keys, they were hitting the performance
'roof' even with "real" read traffic. Presuming such "real" read
traffic is more expensive to process than key misses on an empty
cluster, that is even more inconsistent with the hypothesis.

(I'm hoping to have time to run my test on EC2 tonight; will see.)

-- 
/ Peter Schuller
