cayenne-user mailing list archives

From Hans Pikkemaat <>
Subject Re: Object Caching
Date Thu, 12 Nov 2009 08:43:53 GMT

Yes, the paginated query would indeed be the only way for me to go forward.
The problem however is that I get the exception I posted earlier.



Andrus Adamchik wrote:
> For paginated queries we contemplated a strategy of keeping a constant  
> number of fully resolved objects in the list. I.e. when a page is swapped  
> in, some other (LRU?) page is swapped out. We decided against it, as  
> in the general case it is hard to consistently predict which page should  
> be swapped out.
> However it should be rather easy to write such a list for a specific  
> case with a known access order (e.g. a standard iteration order). In  
> fact I would vote to even include such implementation in Cayenne going  
> forward.
> More specifically, you can extend IncrementalFaultList [1], overriding  
> 'resolveInterval' to swap out previously read pages, turning them back  
> into ids. And the good part is that you can use your extension  
> directly without any need to modify the rest of Cayenne.
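The swap-out strategy described above can be modeled in plain Java. This is a self-contained sketch of the pattern only, not Cayenne's actual IncrementalFaultList API; the class name and the id-resolving function are hypothetical stand-ins for a per-page database fetch:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.IntFunction;

// Self-contained model of the swap-out strategy: only the N most recently
// accessed pages hold fully resolved objects; older pages are dropped and
// re-resolved (re-fetched) on the next access. A Cayenne subclass would do
// the equivalent inside an overridden resolveInterval().
class SwappingPagedList {
    private final int size;
    private final int pageSize;
    private final int maxResolvedPages;
    private final IntFunction<String> resolver; // stands in for a fetch by id

    // page index -> resolved objects; access order makes this an LRU map
    private final Map<Integer, List<String>> resolved =
            new LinkedHashMap<>(16, 0.75f, true);

    SwappingPagedList(int size, int pageSize, int maxResolvedPages,
                      IntFunction<String> resolver) {
        this.size = size;
        this.pageSize = pageSize;
        this.maxResolvedPages = maxResolvedPages;
        this.resolver = resolver;
    }

    String get(int index) {
        int page = index / pageSize;
        List<String> objects = resolved.get(page);
        if (objects == null) {
            // resolve the whole page
            objects = new ArrayList<>();
            int end = Math.min(size, (page + 1) * pageSize);
            for (int i = page * pageSize; i < end; i++) {
                objects.add(resolver.apply(i));
            }
            resolved.put(page, objects);
            // swap out the least recently used page, keeping memory bounded
            if (resolved.size() > maxResolvedPages) {
                Iterator<Integer> lru = resolved.keySet().iterator();
                lru.next();
                lru.remove();
            }
        }
        return objects.get(index % pageSize);
    }

    int resolvedPageCount() {
        return resolved.size();
    }
}
```

With a known access order (e.g. a plain forward iteration), even `maxResolvedPages == 1` would keep memory flat while still resolving objects a page at a time.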
> Andrus
> [1]
> On Nov 12, 2009, at 10:07 AM, Hans Pikkemaat wrote:
>> Hi,
>> So this means that if I use a generic query, the query results are  
>> always stored completely in the object store (or the query cache if I  
>> configure it).
>> Objects are returned in a list, so as long as I have a reference to this  
>> list (because I'm traversing it), these objects are not garbage collected.
>> If I use the query cache the full query results are cached. This  
>> means that I can only
>> tell it to remove the whole query.
>> Effectively this means I'm unable to run a big query and process the  
>> results as a stream.
>> So I cannot process the first results and then somehow make them  
>> available for
>> garbage collection.
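The "reference keeps it alive" point can be shown with plain Java and a WeakReference; this is a generic JVM demo, no Cayenne involved:

```java
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

// Demonstrates why objects held in a result list are never collected:
// a weak reference (like those Cayenne's object store uses) stays live
// as long as anything, the list included, references the object strongly.
class WeakRefDemo {
    static boolean reachableWhileListed() {
        List<Object> results = new ArrayList<>();
        results.add(new Object());
        WeakReference<Object> weak = new WeakReference<>(results.get(0));
        System.gc(); // a hint only; cannot clear a strongly reachable object
        boolean stillThere = weak.get() != null;
        results.clear(); // only now is the object eligible for collection
        return stillThere;
    }

    public static void main(String[] args) {
        System.out.println(reachableWhileListed()); // always true
    }
}
```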
>> The only option I have would be the iterated query, but this is only  
>> useful for queries on one table without any relations, because it is  
>> not possible to use prefetching, nor is it possible to manually  
>> construct relations between objects.
>> My conclusion here is that Cayenne is simply not suitable for doing  
>> large batch-wise query processing because of the memory implications.
>> tx
>> HPI
>> Andrus Adamchik wrote:
>>> As mentioned in the docs, individual objects and query lists are
>>> cached independently. Of course, query lists contain a subset of the  
>>> cached object store objects. An object won't get gc'd if it is also
>>> stored in a query list.
>>> Now, list cache expiration is controlled via the query cache factory. By
>>> default this is an LRU map, so as long as the map has enough space to
>>> hold lists (its capacity == # of lists, not # of objects), the  
>>> objects won't get gc'd.
>>> You can explicitly remove entries from the cache via QueryCache  
>>> remove
>>> and removeGroup methods. Or you can use a different QueryCacheFactory
>>> that implements some custom expiration/cleanup mechanism.
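That default LRU behavior can be illustrated with a plain-Java sketch (a model of the idea only, not Cayenne's actual cache class): capacity counts cached lists, one per query key, and evicting a list is what finally makes its objects eligible for collection:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Model of an LRU query cache: capacity limits the number of cached
// *lists* (one per query), not the number of objects inside them. When
// a new list is put and the cache is full, the least recently used list
// is evicted, and its objects can then be gc'd if nothing else holds them.
class LruQueryCache {
    private final Map<String, List<?>> cache;

    LruQueryCache(int maxLists) {
        // third constructor arg = true -> iteration in access order (LRU)
        cache = new LinkedHashMap<String, List<?>>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, List<?>> eldest) {
                return size() > maxLists;
            }
        };
    }

    void put(String queryKey, List<?> result) { cache.put(queryKey, result); }
    List<?> get(String queryKey) { return cache.get(queryKey); }
    void remove(String queryKey) { cache.remove(queryKey); } // explicit expiration
    int size() { return cache.size(); }
}
```

Explicit removal (as with QueryCache's remove/removeGroup) and LRU eviction both work at the granularity of a whole cached list, which is why individual results cannot be released one by one.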
>>> Andrus
>>> On Nov 11, 2009, at 3:43 PM, Hans Pikkemaat wrote:
>>>> Hi,
>>>> I use the latest version of Cayenne, 3.0b, and am experimenting with
>>>> the object caching features.
>>>> The documentation states that committed objects are purged from the
>>>> cache because it uses weak references.
>>>> (
>>>> If I however run a query using SQLTemplate, which caches the objects
>>>> into the DataContext local cache (object store), the objects don't
>>>> seem to be purged at all. If I simply run the query and dump the
>>>> contents using an iterator on the resulting List, the number of
>>>> registered objects in the object store stays the same
>>>> (dataContext.getObjectStore().registeredObjectsCount()).
>>>> Even if I manually run System.gc() I don't see any changes (I know
>>>> this can be normal, as gc() doesn't guarantee anything).
>>>> What am I doing wrong? Under which circumstances will cayenne purge
>>>> the cache?
>>>> tx
>>>> Hans
