cayenne-user mailing list archives

From Andrus Adamchik <>
Subject Re: OutOfMemoryError: reading a large number of objects one by one
Date Tue, 15 May 2007 12:30:07 GMT

On May 15, 2007, at 12:47 AM, Tomi N/A wrote:

> Reduced the max number of objects to 1000. The result? An NPE at:
>
>     for (MyClassC mcc : (List<MyClassC>) mca.getToMyClassC()
>             .getToParentClass().getMyClassCArray()) {

OK, so the cache will have to be big enough to hold all objects resolved
within the lifetime of a context. So let's try another strategy: set the
max objects back to 10000 and uncheck "use shared cache" for the
DataDomain.
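As background to why the shared cache fills up, here is a plain-Java sketch (no Cayenne APIs involved; the class, method names, and maps are illustrative stand-ins of my own) contrasting one long-lived shared cache with short-lived per-chunk caches that become garbage as soon as each chunk is done:

```java
import java.util.HashMap;
import java.util.Map;

public class CacheScopeDemo {

    // Stands in for the DataDomain's shared snapshot cache:
    // it lives as long as the app and only ever grows.
    static final Map<Integer, String> SHARED_CACHE = new HashMap<>();

    // Simulates fetching one record; a real app would hit the database.
    static String fetch(int id) {
        return "row-" + id;
    }

    static int processWithSharedCache(int total) {
        for (int id = 0; id < total; id++) {
            SHARED_CACHE.put(id, fetch(id)); // never evicted -> OOM risk
        }
        return SHARED_CACHE.size();
    }

    static int processWithLocalCaches(int total, int chunk) {
        int maxResident = 0;
        for (int start = 0; start < total; start += chunk) {
            // Stands in for a fresh per-context cache for each chunk:
            // nothing references it after the chunk, so GC reclaims it.
            Map<Integer, String> localCache = new HashMap<>();
            for (int id = start; id < Math.min(start + chunk, total); id++) {
                localCache.put(id, fetch(id));
            }
            maxResident = Math.max(maxResident, localCache.size());
        }
        return maxResident;
    }

    public static void main(String[] args) {
        System.out.println("shared cache holds: " + processWithSharedCache(10_000));
        System.out.println("local cache peak:   " + processWithLocalCaches(10_000, 1_000));
    }
}
```

With 10000 objects the shared variant keeps all 10000 resident, while the chunked variant never holds more than one chunk at a time.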

If this doesn't work, I suggest running the app in a profiler to see
exactly how objects are allocated and collected.
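Short of a full profiler, a crude heap reading before and after a batch can already show whether memory is being released. A minimal stand-alone sketch (JDK only; the class and workload are illustrative, not part of the app in question):

```java
public class HeapWatch {

    // Rough resident-heap reading; forcing a GC first makes it less noisy.
    static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        rt.gc();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeapBytes();

        // Stand-in workload: allocate ~8 MB and keep it reachable,
        // the way a cache that never evicts would.
        byte[][] retained = new byte[8][];
        for (int i = 0; i < retained.length; i++) {
            retained[i] = new byte[1024 * 1024];
        }

        long after = usedHeapBytes();
        System.out.printf("retained roughly %d KB%n", (after - before) / 1024);

        // In a leak hunt: if a later reading taken after the batch's
        // objects should be unreachable does NOT drop back toward the
        // baseline, something is still holding references.
    }
}
```

Readings like this are only a rough signal (the JVM may resize the heap between calls); a real profiler will show which classes actually dominate the retained set.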

> The database referential integrity ensures there can be no nulls if
> (mcc != null), which it is.
> As far as -Xmx is concerned, it's at its default value (64M), which
> should be several times more than necessary for the job.

Agreed - the default 64m should be enough if there are no leaks.

