incubator-cassandra-user mailing list archives

From Noble Paul നോബിള്‍ नोब्ळ् <noble.p...@gmail.com>
Subject Re: Limited row cache size
Date Mon, 25 Jun 2012 05:10:02 GMT
I was using the DataStax build. Do they also have a 1.1 build?

On Mon, Jun 18, 2012 at 9:05 AM, aaron morton <aaron@thelastpickle.com> wrote:
> Cassandra 1.1.1 ships with concurrentlinkedhashmap-lru-1.3.jar.
>
> row_cache_size_in_mb starts life as an int, but the byte size is stored
> as a long:
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/CacheService.java#L143
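>
> A minimal sketch of that conversion (illustrative only, not the actual
> CacheService code; see the link above):
>
>     // The yaml value is an int count of megabytes; widening to long
>     // before multiplying keeps byte counts above 2 GB from overflowing.
>     int rowCacheSizeInMb = 1920;  // from cassandra.yaml
>     long capacityInBytes = rowCacheSizeInMb * 1024L * 1024L;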
>
> Cheers
>
>
> -----------------
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 15/06/2012, at 7:13 PM, Noble Paul നോബിള്‍ नोब्ळ् wrote:
>
> hi,
> I configured my server with row_cache_size_in_mb: 1920.
>
> When I started the server and checked JMX, it showed the capacity was
> set to 1024 MB.
>
> I investigated further and found that the version of
> concurrentlinkedhashmap used is 1.2, which caps the maximum capacity at 1 GB.
>
> So, in Cassandra 1.1 the max cache size I can use is 1 GB.
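>
> A minimal sketch of the clamping I observed (MAXIMUM_CAPACITY is my
> guess at how a 1 << 30 limit would be applied, not the library's
> actual code):
>
>     // 1 << 30 bytes = 1073741824 = exactly 1 GB, the cap in 1.2.
>     static final int MAXIMUM_CAPACITY = 1 << 30;
>     int requestedBytes = 1920 * 1024 * 1024;  // 2013265920, still fits in an int
>     int effectiveBytes = Math.min(requestedBytes, MAXIMUM_CAPACITY);  // clamped to 1 GB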
>
>
> Digging deeper, I realized that throughout the API chain the cache
> size is passed around as an int, so even if I write my own
> CacheProvider the max size would be Integer.MAX_VALUE, about 2 GB.
>
> Unless Cassandra moves to concurrentlinkedhashmap 1.3 and changes the
> signatures to use a long for the size, we can't have a big cache. In my
> view, 1 GB is a really small limit.
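>
> A sketch of what the 1.3 API would allow (Builder and
> maximumWeightedCapacity(long) are from concurrentlinkedhashmap-lru 1.3;
> the key/value types here are just placeholders):
>
>     import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
>
>     // A long capacity lets the cache budget exceed Integer.MAX_VALUE;
>     // a byte-counting weigher would make this a byte budget.
>     ConcurrentLinkedHashMap<String, byte[]> cache =
>         new ConcurrentLinkedHashMap.Builder<String, byte[]>()
>             .maximumWeightedCapacity(8L * 1024 * 1024 * 1024)  // 8 GB
>             .build();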
>
> So, even if I have bigger machines, I can't really make use of them.
>
>
>
> --
> -----------------------------------------------------
> Noble Paul
>
>



-- 
-----------------------------------------------------
Noble Paul
