harmony-dev mailing list archives

From "Pavel Afremov" <pavel.n.afre...@gmail.com>
Subject Re: [drlvm] finalizer design questions
Date Wed, 27 Dec 2006 11:04:26 GMT
Oh. I see. In your proposal the number of threads doesn't change. That's
the difference. I see.

So on an Intel Core 2 Quad, for example, the VM will have 4 permanent
high-priority finalizer threads. Is that correct?

BR

On 12/27/06, Weldon Washburn <weldonwjw@gmail.com> wrote:
>
> On 12/26/06, Pavel Afremov <pavel.n.afremov@gmail.com> wrote:
> >
> > Hi.
> >
> > On 12/19/06, in the "[drlvm][gcv5] finalizer design" thread, I wrote
> > that "WBS should increase relative finalization performance by the
> > following steps:
> >
> >   - Increase the number of finalizer threads while their quantity is
> >   less than the number of processors.
> >   - Use locks for stopping user threads. I have some ideas how to do
> >   this without deadlock.
> >   - Increase the priority of the finalizer threads. Or reduce the
> >   priority of user threads which generate finalizable objects, because
> >   finalization activities shouldn't stop "good" threads which don't
> >   create finalizable objects."
>
>
> hmm.... I am not sure how a JVM would determine which Java app thread(s)
> will generate finalizable objects at specific rates.  In other words, how
> would the JVM anticipate which thread(s) in the future are "good" and
> which one(s) are bad??  I vote for a simple design (with fewer deadlocks
> to debug and dodge.)
>
> > As I understand it, the "independent" investigation provided by Weldon,
> > which is discussed here, reaches the same results. So this design can be
> > considered as approved by the Harmony community.
>
>
> hmmm.... OK.   Just to confirm we are on the same page.  The design I
> suggest is to have exactly one finalizer thread for each CPU in the box.
> These finalizer threads are to be created during JVM initialization.
> No finalizer threads are added or killed once JVM initialization
> completes. The priority of each of these finalizer threads is set slightly
> above the highest priority of the Java app threads.  The finalizer
> queues have synchronized access.  All threads must grab the queue's lock
> to enqueue/dequeue objects.  Is this what you are thinking?
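>
> Roughly, in Java-flavored code, the scheme would look like the sketch
> below (an illustration only -- in drlvm this would be native VM code, and
> all the names here are invented):
>
>     // Sketch: one finalizer thread per CPU, created at JVM init,
>     // pulling from a queue with synchronized access.
>     import java.util.LinkedList;
>
>     class FinalizerThreads {
>         // one queue guarded by its own lock: GC enqueues, finalizer threads dequeue
>         private final LinkedList<Object> queue = new LinkedList<Object>();
>
>         void start() {
>             int ncpus = Runtime.getRuntime().availableProcessors();
>             for (int i = 0; i < ncpus; i++) {          // exactly one thread per CPU
>                 Thread t = new Thread(new Runnable() {
>                     public void run() {
>                         while (true) {
>                             runFinalizer(dequeue());   // run the object's finalize()
>                         }
>                     }
>                 }, "finalizer-" + i);
>                 // stands in for "slightly above the highest app thread priority";
>                 // a real VM would use an OS priority above any Java thread
>                 t.setPriority(Thread.MAX_PRIORITY);
>                 t.setDaemon(true);
>                 t.start();                             // created at init, never killed later
>             }
>         }
>
>         void enqueue(Object obj) {                     // called by the GC
>             synchronized (queue) {
>                 queue.addLast(obj);
>                 queue.notify();
>             }
>         }
>
>         private Object dequeue() {
>             synchronized (queue) {
>                 while (queue.isEmpty()) {
>                     try { queue.wait(); } catch (InterruptedException ignored) {}
>                 }
>                 return queue.removeFirst();
>             }
>         }
>
>         private void runFinalizer(Object obj) { /* VM-internal, elided */ }
>     }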
>
> BR
> >
> > Pavel Afremov.
> >
> >
> >
> >
> > On 12/26/06, Gregory Shimansky <gshimansky@gmail.com> wrote:
> > >
> > > 2006/12/26, Weldon Washburn <weldonwjw@gmail.com>:
> > > >
> > > > On 12/25/06, Gregory Shimansky <gshimansky@gmail.com> wrote:
> > > > >
> > > > > Weldon Washburn wrote:
> > > > > > On 12/24/06, Gregory Shimansky <gshimansky@gmail.com> wrote:
> > > > > >>
> > > > > >> On Sunday 24 December 2006 16:23 Weldon Washburn wrote:
> > > > > [snip]
> > > > >
> > > > > This is a very good investigation, looks like this is exactly the
> > > > > way that production VMs follow with finalizers.
> > > > >
> > > > > One last thing that is unclear to me: if an application creates
> > > > > many objects of a class which has a long-running (never-ending)
> > > > > finalize() method, it may cause an out-of-memory condition because
> > > > > the objects wouldn't be removed from the heap. Is it true with this
> > > > > approach?
> > > >
> > > >
> > > > I ran some more tests.  I created 10,000 finalizable objects.  Then I
> > > > "newed" enough empty arrays to cause the GC to shove the 10,000
> > > > finalizable objects into the finalization queue.  The finalize()
> > > > method contains exactly the same while(true){ busy } workload as
> > > > before.
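> > > >
> > > > The test looked along these lines (an illustration of that setup, not
> > > > the exact test code; the array size is a guess):
> > > >
> > > >     public class FinalizerFlood {
> > > >         protected void finalize() {
> > > >             while (true) { /* same busy workload: never returns */ }
> > > >         }
> > > >
> > > >         public static void main(String[] args) {
> > > >             for (int i = 0; i < 10000; i++) {
> > > >                 new FinalizerFlood();   // 10,000 finalizable objects, dropped at once
> > > >             }
> > > >             while (true) {
> > > >                 // keep allocating so the GC runs and shoves the dead
> > > >                 // objects onto the finalization queue
> > > >                 byte[] filler = new byte[1024 * 1024];
> > > >             }
> > > >         }
> > > >     }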
> > > >
> > > > As far as I can tell, only the first object on the finalize queue is
> > > > ever executed. The other 9,999 objects remain in the queue until the
> > > > first object's finalizer returns (which it never does.)  Supporting
> > > > evidence is that when I change the workload so that the finalize()
> > > > method finishes after 100M loops, the JVM then moves on to the next
> > > > finalizable object in the queue.
> > > >
> > > > In other words, as far as the JVM is concerned, the Java app
> > > > programmer can legitimately create a situation where the JVM is
> > > > flooded with objects requiring finalization and these objects chew up
> > > > all available Java heap.  And the Java app programmer can legitimately
> > > > create a situation where a finalizer never finishes and hogs 97% of
> > > > available CPU.  It looks to me like one of those unavoidable
> > > > situations where the Java app will crash and burn.  It's much the same
> > > > situation as intentionally creating an infinite linked list of live
> > > > objects.  It's easy to build an app that does this.  And, yes, the JVM
> > > > will run out of heap and exit.  It's probably best to tell the app
> > > > programmer, "just don't do this".
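> > > >
> > > > (For example, something as trivial as the following made-up snippet
> > > > will exhaust the heap, finalizers or not:)
> > > >
> > > >     public class LiveLeak {
> > > >         public static void main(String[] args) {
> > > >             java.util.LinkedList<byte[]> live = new java.util.LinkedList<byte[]>();
> > > >             while (true) {
> > > >                 // every element stays reachable, so the GC can never reclaim it
> > > >                 live.add(new byte[1024]);
> > > >             }
> > > >         }
> > > >     }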
> > >
> > >
> > > Thanks for an interesting investigation, it is really valuable. It
> > > shows that the production VM which you've used does not create any new
> > > threads for finalization. I was thinking about some legitimate scenario
> > > which could lead to a jammed finalizer queue and therefore to OOME
> > > eventually. But later I realized that the whole Java process would just
> > > hang.
> > >
> > > The scenario which I was thinking of is if an application has a class
> > > with a finalizer that deals with file IO, like closing a file. On
> > > Unixes, when a file is located on an NFS filesystem and this filesystem
> > > is disconnected, then IO on such a file (usually, depending on NFS mount
> > > options) stops the process. I think it is the whole process, not just
> > > one thread (correct me if I am wrong), so it wouldn't be just the
> > > finalization queue that would wait for NFS IO, it would be all VM
> > > threads, so OOME would not happen.
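> > >
> > > Roughly the kind of class I mean (a made-up illustration, the path and
> > > names are invented):
> > >
> > >     import java.io.FileOutputStream;
> > >     import java.io.IOException;
> > >
> > >     public class NfsBackedResource {
> > >         private final FileOutputStream out;
> > >
> > >         public NfsBackedResource(String path) throws IOException {
> > >             out = new FileOutputStream(path);   // e.g. a file on a hard NFS mount
> > >         }
> > >
> > >         protected void finalize() throws Throwable {
> > >             try {
> > >                 out.close();   // can block indefinitely if the NFS server goes away
> > >             } finally {
> > >                 super.finalize();
> > >             }
> > >         }
> > >     }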
> > >
> > > --
> > > Gregory
> > >
> > >
> >
> >
>
>
> --
> Weldon Washburn
> Intel Enterprise Solutions Software Division
>
>
