Yes, it would be nice if you could add a parameter in storage-conf.xml to enable write-through to the row cache. There are many cases that require new keys to be immediately available for reads. In my case, I'm thinking of caching 30-50% of all records in memory to reduce read latency.
On Tue, Feb 16, 2010 at 7:11 PM, Weijun Li <email@example.com> wrote:
> Just started to play with the row cache feature in trunk: it seems to be
> working fine so far except that for RowsCached parameter you need to specify
> number of rows rather than a percentage (e.g., "20%" doesn't work).

20% works, but it's 20% of the rows at server startup. So on a fresh
start that is zero.

Maybe we should just get rid of the % feature...

> The problem is: when you write to Cassandra it doesn't seem to put the new
> keys in row cache (it is said to update instead invalidate if the entry is
> already in cache). Is it easy to implement this feature?

It's deliberately not done. For many (most?) workloads you don't want
fresh writes blowing away your read cache. The code is in

    ColumnFamily cachedRow = cfs.getRawCachedRow(mutation.key());
    if (cachedRow != null)

I think it would be okay to have a WriteThrough option for what you're
describing.
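To make the two policies concrete, here is a minimal sketch (hypothetical class and method names, not Cassandra's actual ones) contrasting the current behavior quoted above — refresh a row only if it is already cached, so fresh writes never evict read-cache entries — with the proposed WriteThrough option, where every write inserts the new key so it is immediately readable from cache:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the two row-cache write policies being discussed.
public class RowCacheSketch {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final boolean writeThrough;

    public RowCacheSketch(boolean writeThrough) {
        this.writeThrough = writeThrough;
    }

    // Called on every mutation, mirroring the getRawCachedRow(...) check
    // quoted from the mailing list above.
    public void applyWrite(String key, String row) {
        if (writeThrough) {
            // proposed option: new keys become readable from cache immediately
            cache.put(key, row);
        } else {
            // current behavior: only refresh rows that are already hot;
            // a write for an uncached key does not touch the read cache
            cache.computeIfPresent(key, (k, old) -> row);
        }
    }

    public String cached(String key) {
        return cache.get(key);
    }

    public static void main(String[] args) {
        RowCacheSketch defaultPolicy = new RowCacheSketch(false);
        defaultPolicy.applyWrite("k1", "v1");
        System.out.println(defaultPolicy.cached("k1")); // null: new key not cached

        RowCacheSketch writeThrough = new RowCacheSketch(true);
        writeThrough.applyWrite("k1", "v1");
        System.out.println(writeThrough.cached("k1"));  // v1: cached on write
    }
}
```

The design trade-off is the one named in the thread: write-through keeps new keys hot at the cost of letting a burst of fresh writes push read-hot rows out of a bounded cache, which is why it makes sense as an opt-in configuration parameter rather than the default.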