cassandra-user mailing list archives

From Mike Malone <>
Subject Re: Reading thousands of columns
Date Wed, 14 Apr 2010 17:31:16 GMT
On Wed, Apr 14, 2010 at 7:45 AM, Jonathan Ellis <> wrote:

> 35-50ms for how many rows of 1000 columns each?
> get_range_slices does not use the row cache, for the same reason that
> oracle doesn't cache tuples from sequential scans -- blowing away
> 1000s of rows worth of recently used rows queried by key, for a swath
> of rows from the scan, is the wrong call more often than it is the
> right one.

Couldn't you cache a list of keys that were returned for the key range, then
cache individual rows separately or not at all?
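Roughly what I have in mind, as a toy Python sketch (none of these names are Cassandra's actual API; the fetch functions stand in for hits to storage):

```python
from collections import OrderedDict

class RowCache:
    """Minimal LRU row cache (illustrative only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.rows = OrderedDict()

    def get(self, key):
        if key in self.rows:
            self.rows.move_to_end(key)   # mark as recently used
            return self.rows[key]
        return None

    def put(self, key, row):
        self.rows[key] = row
        self.rows.move_to_end(key)
        if len(self.rows) > self.capacity:
            self.rows.popitem(last=False)  # evict least recently used

def range_slice(start, end, key_range_cache, row_cache, fetch_keys, fetch_row):
    """Cache only the key list per range; rows go through the row cache."""
    if (start, end) in key_range_cache:
        keys = key_range_cache[(start, end)]
    else:
        keys = fetch_keys(start, end)           # storage read for the key list
        key_range_cache[(start, end)] = keys
    result = []
    for k in keys:
        row = row_cache.get(k)
        if row is None:
            row = fetch_row(k)                  # miss: read the row from storage
            row_cache.put(k, row)
        result.append((k, row))
    return result
```

A repeated scan over the same range then only re-reads rows that have actually been evicted, instead of either skipping the cache entirely or stuffing the whole swath of rows into it.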

By "blowing away rows queried by key" I'm guessing you mean "pushing them
out of the LRU cache," not explicitly blowing them away? Either way I'm not
entirely convinced. In my experience I've had pretty good success caching
items that were pulled out via more complicated join / range type queries.
If your system is doing lots of range queries, and not a lot of lookups by
key, you'd obviously see a performance win from caching the range queries.
Maybe range scan caching could be turned on separately?
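To be fair, the pollution effect you're describing is easy to demonstrate. A toy example (plain Python, nothing Cassandra-specific) with a 4-entry LRU: four hot keys read by key, then one scan of six cold rows:

```python
from collections import OrderedDict

CAPACITY = 4
cache = OrderedDict()

def touch(key):
    """Insert/refresh a key in a tiny LRU, evicting the oldest on overflow."""
    cache[key] = True
    cache.move_to_end(key)
    if len(cache) > CAPACITY:
        cache.popitem(last=False)   # evict least recently used

for hot in ["a", "b", "c", "d"]:
    touch(hot)                      # frequently read by key

for cold in ["s1", "s2", "s3", "s4", "s5", "s6"]:
    touch(cold)                     # one-off range scan

# Every hot key has been pushed out by scan rows we'll likely never reread.
```

So I get why get_range_slices bypasses the row cache by default; my point is just that for scan-heavy workloads the trade-off flips, which is why a separate knob seems reasonable.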

