openjpa-users mailing list archives

From Christiaan <christiaan...@hotmail.com>
Subject flush(): memory & performance
Date Thu, 13 Sep 2007 09:28:09 GMT

Hi,
I am working on the following scenario: I need to import a large file and
build a tree-like object structure from it. The process is pretty much
stream-based, so once I have created parts of the tree I don’t need to access
them anymore. However, if anything fails, the complete creation needs to be
rolled back.
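
To make the scenario concrete, here is a stripped-down sketch of what I am
doing (Node, import.properties, and the chunk size of 1000 are simplified
placeholders, not my real code):

import java.io.FileInputStream;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.Transaction;

/** Placeholder for my persistence-capable tree class (needs JDO metadata/enhancement). */
class Node {
    private String value;
    private List<Node> children = new ArrayList<Node>();
    Node(String value) { this.value = value; }
    void addChild(Node child) { children.add(child); }
}

public class TreeImport {
    public void importFile(Iterator<String> records) throws Exception {
        Properties props = new Properties();
        props.load(new FileInputStream("import.properties")); // PMF config, placeholder
        PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
        PersistenceManager pm = pmf.getPersistenceManager();
        Transaction tx = pm.currentTransaction();
        tx.setRetainValues(false);  // fields should be cleared after commit
        tx.begin();
        try {
            Node root = new Node("root");
            pm.makePersistent(root);
            int count = 0;
            while (records.hasNext()) {
                root.addChild(new Node(records.next())); // only root stays referenced
                if (++count % 1000 == 0) {
                    pm.flush();  // push SQL to the database but keep the tx open
                }
            }
            tx.commit();         // the import is all-or-nothing
        } catch (Exception e) {
            tx.rollback();       // any failure undoes the whole import
            throw e;
        } finally {
            pm.close();
        }
    }
}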

I am trying to do this by using flush(). As mentioned in another thread, this
results in a big performance hit (I have created an issue for it). However, I
also notice that the fields of dirty objects are not cleared on flush(), so
they cannot be garbage collected (I have set retainValues to false). Since it
is a tree-like structure, I also keep a reference to the root node during the
whole process, so in this scenario flush() easily leads to OutOfMemory errors.
When I do intermediate commits instead, I get constant performance and memory
usage, because with retainValues=false the already processed nodes transition
to hollow.

I was under the impression that flush() is exactly what should be used in this
scenario, or am I mistaken? Would it be possible, when retainValues is set to
false, to transition dirty objects to hollow on flush(), just as commit() does?
I know the JDO/JPA specs do not say anything about this situation for flush(),
but to me it seems like a logical consequence, and for this scenario it would
solve the memory problem.
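
For comparison, this is the intermediate-commit variant (continuing the sketch
above, same placeholder names) that does give me flat memory usage, at the
cost of the all-or-nothing rollback:

// Commit per chunk instead of flushing. With retainValues=false the
// committed nodes transition to hollow, so their fields can be garbage
// collected. The downside: chunks committed before a failure stay in
// the database, so the import is no longer atomic.
int count = 0;
while (records.hasNext()) {
    root.addChild(new Node(records.next()));
    if (++count % 1000 == 0) {
        tx.commit();  // processed nodes go hollow, memory stays flat
        tx.begin();   // start the next chunk
    }
}
tx.commit();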

Kind regards,
Christiaan

-- 
View this message in context: http://www.nabble.com/flush%28%29%3A-memory---performance-tf4434606.html#a12651555
Sent from the OpenJPA Users mailing list archive at Nabble.com.

