openjpa-users mailing list archives

From Chris Wolf <>
Subject Re: What is the inverse of "lazy fetching"?
Date Tue, 08 Jan 2013 18:29:32 GMT
Hello Kevin,

Thanks for the reply.  I am not sure if that will help or not.  So,
referring back to my original scenario: I have a many-to-one
relationship, with the master object holding a collection field of
references to the child objects, and there could be a huge number of
child objects.  Are you saying that I could create child objects in
batches of, say, 1000 and either do:
em.getTransaction().commit();  // managed

-= or =-

em.clear();  // JSE, not managed

at the end of each batch?

Will that somehow remove the committed child objects from the collection?

I guess I would need to keep an active reference to the master object
so that subsequent batches of child objects can be added and
committed.  At this point, I'm not concerned about keeping all the
batches under one transaction; I just need to get the millions of
child records added to the DB without using huge amounts of memory.
Again, it's kind of like the inverse of "lazy fetching", but on the
WRITING side.
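For reference, the batched-write pattern being discussed in this thread might look roughly like the following in JSE code. This is only a sketch, not a tested implementation: `Master`, `Child`, `masterId`, `totalChildren`, and the batch size of 1000 are illustrative placeholders, and it assumes each child is persisted explicitly rather than via cascading from the master's collection.

```java
// Sketch: inserting millions of children without holding them all in memory.
// Assumes Master and Child are mapped entities with a bidirectional
// one-to-many / many-to-one relationship; only the owning (many-to-one)
// side is populated here, so the master's collection never grows in memory.
EntityManager em = emf.createEntityManager();
em.getTransaction().begin();

Master master = em.find(Master.class, masterId);

for (int i = 1; i <= totalChildren; i++) {
    Child child = new Child();
    child.setMaster(master);   // set the many-to-one side only
    em.persist(child);

    if (i % 1000 == 0) {
        em.flush();            // push the pending INSERTs to the database
        em.clear();            // detach all managed entities, freeing memory
        // master is now detached, so re-attach it for the next batch
        master = em.find(Master.class, masterId);
    }
}

em.getTransaction().commit();
em.close();
```

Note that after `em.clear()` every entity in the persistence context (including the master) is detached, which is why the master is re-fetched before the next batch; alternatively, committing and restarting the transaction per batch would have a similar memory effect in a managed environment.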



On Tue, Jan 8, 2013 at 10:48 AM, Kevin Sutter <> wrote:
> EntityManager.clear()
> In a managed, transactional environment, the persistence context is cleared
> at the end of each transaction.  But, in JSE mode, the persistence
> context is still active at the end of a transaction.  EM.clear() will
> clear the persistence context.  As long as you don't have any other
> references to these entities, they should get cleaned up by the gc.
> Is this what you were looking for?
> Kevin
> On Tue, Jan 8, 2013 at 9:29 AM, Chris Wolf <> wrote:
>> I have a model with, say, a one-to-many relationship, where there may
>> be an enormous number of child records.  I see that there is thorough
>> documentation treatment on the subject of reading such objects FROM the
>> database, e.g. the fetch=FetchType.LAZY attribute and/or @LRS (large
>> result set), etc., but this seems to only optimize READING.
>> My concern is how can I achieve a similar, converse, optimization when
>> WRITING?  i.e. inserting INTO the database.  For example, as I
>> understand it, once I persist the master object, (enhanced, of
>> course), then the collection field which keeps a set of child records
>> uses transitive persistence to automatically write a child record
>> whenever a child object is added to the collection - but here's the
>> thing; if I'm just doing an initial load of the master object and all
>> its children, and there could be a million children - I don't want
>> each newly added child to stay around in memory (in the master's
>> collection field) once it has been persisted via transitive
>> persistence, otherwise I'll run out of memory.
>> Is there some mechanism to have child objects be removed from memory
>> once persisted?
>> Thanks,
>> Chris
