cassandra-user mailing list archives

From Weijun Li <weiju...@gmail.com>
Subject Re: Testing row cache feature in trunk: write should put record in cache
Date Wed, 17 Feb 2010 01:35:59 GMT
Yes, it would be nice if you could add a parameter to storage-conf.xml to
enable write-through for the row cache. There are many cases that require new
keys to be immediately available for reads. In my case I'm thinking of caching
30-50% of all records in memory to reduce read latency.
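[For illustration only: a hypothetical sketch of what such an option might look like as a per-ColumnFamily attribute in storage-conf.xml. The RowsCachedWriteThrough attribute does not exist in Cassandra; it is an assumption for the sake of the example.]

```xml
<!-- Hypothetical: RowsCachedWriteThrough is NOT a real Cassandra option -->
<ColumnFamily Name="Users"
              RowsCached="50%"
              RowsCachedWriteThrough="true"/>
```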

Thanks,

-Weijun

On Tue, Feb 16, 2010 at 5:17 PM, Jonathan Ellis <jbellis@gmail.com> wrote:

> On Tue, Feb 16, 2010 at 7:11 PM, Weijun Li <weijunli@gmail.com> wrote:
> > Just started to play with the row cache feature in trunk: it seems to be
> > working fine so far, except that for the RowsCached parameter you need to
> > specify the number of rows rather than a percentage (e.g., "20%" doesn't
> > work).
>
> 20% works, but it's 20% of the rows at server startup.  So on a fresh
> start that is zero.
>
> Maybe we should just get rid of the % feature...
>
> > The problem is: when you write to Cassandra it doesn't seem to put the new
> > keys in the row cache (it is said to update, rather than invalidate,
> > entries that are already in the cache). Is it easy to implement this
> > feature?
>
> It's deliberately not done.  For many (most?) workloads you don't want
> fresh writes blowing away your read cache.  The code is in
> Table.apply:
>
>     ColumnFamily cachedRow = cfs.getRawCachedRow(mutation.key());
>     if (cachedRow != null)
>         cachedRow.addAll(columnFamily);
>
> I think it would be okay to have a WriteThrough option for what you're
> asking, though.
>
> -Jonathan
>
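[The cache policy Jonathan describes, and the WriteThrough option he suggests, can be sketched with a toy in-memory cache. This is illustrative only, not Cassandra code: the class, the String-based rows, and the writeThrough flag are all assumptions made for the sake of the example.]

```java
import java.util.HashMap;
import java.util.Map;

// Toy model (NOT Cassandra code) of the row-cache policy discussed above:
// a write updates the cached row only if the key is already cached, so
// fresh writes do not blow away the read cache. A hypothetical writeThrough
// flag would instead populate the cache on every write.
public class RowCacheSketch {
    private final Map<String, String> rowCache = new HashMap<>();
    private final boolean writeThrough;

    public RowCacheSketch(boolean writeThrough) {
        this.writeThrough = writeThrough;
    }

    public void apply(String key, String columns) {
        // ...the durable write (commitlog/memtable) would happen here...
        if (rowCache.containsKey(key)) {
            // Default policy: keep an already-cached row consistent.
            rowCache.put(key, columns);
        } else if (writeThrough) {
            // Hypothetical write-through: cache fresh writes immediately.
            rowCache.put(key, columns);
        }
    }

    public boolean isCached(String key) {
        return rowCache.containsKey(key);
    }
}
```

With the default policy a freshly written key is not cached until it is read; with the hypothetical write-through flag it is cached immediately, which matches the "new keys immediately available for read" use case above.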
