cassandra-user mailing list archives

From Adrian Cockcroft <>
Subject Re: Cassandra as in-memory cache
Date Sun, 11 Sep 2011 21:42:45 GMT
You should be using the off-heap row cache option. That way you avoid GC
overhead, and the rows are stored in a compact serialized form, which means
you fit more cache entries in RAM. The trade-off is slightly more CPU for
deserialization.
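For reference, a minimal sketch of what enabling the off-heap cache looked like in that era, assuming the Cassandra 1.0 SerializingCacheProvider and cassandra-cli syntax; "Users" is a placeholder column family name:

```
create column family Users
  with rows_cached = 100000
  and row_cache_provider = 'SerializingCacheProvider';
```

SerializingCacheProvider stores rows serialized outside the JVM heap, which is what avoids the GC pressure Adrian describes.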


On Sunday, September 11, 2011, aaron morton <> wrote:
> If the row is in the row cache the read path will not touch the SSTables.
Depending on the workload I would then look at setting *low* memtable flush
settings to leave as much memory as possible for the row cache.
> Then set the row cache save settings per CF to ensure the cache is warmed
when the node starts.
> The write path will still use the commit log (WAL), so you may want to
disable it using the durable_writes setting on the keyspace.
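> Aaron's steps might look something like this in cassandra-cli (a sketch,
assuming the 0.8/1.0 syntax; "Users" and "MyKeyspace" are placeholder names):
>
```
update column family Users
  with rows_cached = 100000
  and row_cache_save_period = 3600;

update keyspace MyKeyspace
  with durable_writes = false;
```
>
> Here row_cache_save_period periodically saves cache keys to disk so the
cache is pre-warmed on restart, and durable_writes = false skips the commit
log for writes to that keyspace (accepting data loss on a crash).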
> Hope that helps.
> -----------------
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> On 10/09/2011, at 4:38 AM, kapil nayar wrote:
>> Hi,
>> Can we configure some column-families (or keyspaces) in Cassandra to
perform as a pure in-memory cache?
>> The feature should let the memtables always be in-memory (never flushed
to the disk - sstables).
>> The memtable flush threshold settings of time/ memory/ operations can be
set to a max value to achieve this.
>> However, it seems uneven distribution of the keys across the nodes in the
cluster could lead to a Java OutOfMemoryError. In order to prevent this
error, can we overflow some entries to the disk?
>> Thanks,
>> Kapil
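The flush thresholds Kapil mentions were per-column-family settings in the
0.7 cassandra-cli; a sketch of pushing them toward their maximums (names and
units are from that era and may differ in later versions; "Users" is a
placeholder):

```
update column family Users
  with memtable_flush_after = 1440      -- minutes before a forced flush
  and memtable_throughput = 2048        -- MB of writes before a flush
  and memtable_operations = 10;         -- millions of ops before a flush
```

Note these were later deprecated in favor of the global
memtable_total_space_in_mb setting in cassandra.yaml, and as the replies
above point out, raising them cannot prevent an OutOfMemoryError on a node
that receives a disproportionate share of the keys.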
