harmony-dev mailing list archives

From Gregory Shimansky <gshiman...@gmail.com>
Subject Re: [drlvm] finalizer design questions
Date Mon, 25 Dec 2006 23:12:20 GMT
Weldon Washburn wrote:
> On 12/24/06, Gregory Shimansky <gshimansky@gmail.com> wrote:
>>
>> On Sunday 24 December 2006 16:23 Weldon Washburn wrote:
>> > Very interesting.  Please tell me if the following is correct.  Without
>> > WBS, finalizing objects falls further and further behind because
>> > finalization thread(s) are unable to grab enough of the CPU(s) to keep
>> > up.  Instead of increasing the priority of the finalization thread(s),
>> > WBS takes the approach of increasing the number of finalization threads.
>> > The net effect is to increase the rate of finalization by diluting the
>> > OS ready queue.
>> >
>> > Does the following alternative design make sense?  Assume the OS/VM
>> > porting layer allows the VM to change an OS thread's priority
>> > appropriately.  During VM boot, query the OS to determine the number of
>> > CPUs in the box and create one finalizer thread for each CPU.  Never
>> > create additional finalizer threads.  Boost the priority of the
>> > finalizer threads above the level of Java app threads (but probably
>> > below real time priority.)  Note that all of this is orthogonal to
>> > "native" vs. "java" finalizer threads.
>>
>>
>> I like this approach. It probably covers all mentioned problems except
>> for the fundamental finalizers problem of long running (never ending)
>> finalize()
> 
> 
> To answer this question, I built a simple single thread finalizer test.  It
> causes the JVM to call a finalize() method that does a never ending CPU
> intensive task.  Every 10 million iterations, this method prints out an
> iteration count.  While finalize() is running, main() executes a second
> copy of exactly the same workload.  main()'s print statement is slightly
> different so that it's easy to sort out the commingled output.
> 
> I ran the above workload on a product JVM on a single CPU laptop and
> observed the following.  The print statements of main() and finalize() are
> indeed commingled.  This suggests that the JVM runs finalize() in a
> separate thread.  The test ran for a rough approximation of eternity (I
> killed the test after 30 minutes).  This suggests that the JVM is never
> supposed to kill a long running finalizer.  In other words, the finalizer
> is supposed to run as long as it wants.  The utility of building an app
> that contains a finalizer that consumes vast quantities of CPU time is an
> entirely different issue from JVM architecture.
> 
> There were about 40 finalize() print statements to one main() print
> statement.  In other words, the finalize() method consumed about 97% of
> the CPU and main() got the remaining 3%.  Given the workloads are
> identical, one explanation for the above is that the JVM is setting its
> finalizer thread priority higher than normal Java threads.  I don't know
> the current Windows XP thread sched algorithm but my guess is that it has
> some notion of "nice" which allows lower priority threads to run once
> every 30 or 40 thread sched ticks.
> 
> If DRLVM were ever ported to an OS that did not have a "nice" thread sched
> algorithm, we could simply move the finalizer thread priority back to
> normal.
> 
> In other words,  I don't yet see evidence that my original proposal is
> incorrect.  Sorry for the long explanation.
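
For reference, the test described above might look roughly like this. This
is only my sketch: the class name and constants are assumptions, and the
loop here is bounded so the sketch terminates, whereas the real test runs
forever.

```java
// Sketch of the single-thread finalizer test described above.
// Assumptions: class name and report interval are mine; the real test
// loops forever, so this version bounds the loop to make it terminate.
public class FinalizerSpin {
    static final long ITERATIONS_PER_REPORT = 10000000L;
    static final int REPORTS = 3; // the real test never stops

    // CPU-intensive loop; prints a running count every 10 million iterations.
    static long spin(String who) {
        long count = 0;
        for (int r = 0; r < REPORTS; r++) {
            for (long i = 0; i < ITERATIONS_PER_REPORT; i++) {
                count++;
            }
            System.out.println(who + " iterations: " + count);
        }
        return count;
    }

    @Override
    protected void finalize() {
        spin("finalize()"); // same workload, run by the finalizer thread
    }

    public static void main(String[] args) {
        new FinalizerSpin();       // becomes garbage immediately
        System.gc();               // request a collection so finalize() is queued
        System.runFinalization();  // hint the VM to run pending finalizers
        spin("main()");            // identical workload in the main thread
    }
}
```

On a VM that boosts its finalizer thread priority, the finalize() lines
should dominate the interleaved output, as in the 40:1 ratio reported above.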

This is a very good investigation; it looks like this is exactly the 
approach that production VMs take with finalizers.

One last thing that is unclear to me: if an application creates many 
objects of a class with a long running (never ending) finalize() method, 
it may cause an out of memory condition because the objects wouldn't be 
removed from the heap. Is that true with this approach?

If you run your test with many finalizable objects which cannot be 
finalized quickly, can you see the number of application threads in the 
task manager increasing?
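
That experiment might be sketched like this (again, only my sketch: the
class name, allocation count, and sleep duration are assumptions, and a
real stress run would keep allocating without bound):

```java
// Sketch of the suggested stress test: flood the heap with objects whose
// finalize() is deliberately slow, then watch the thread count (and heap
// usage) while finalization falls behind. Names and limits are assumptions.
public class SlowFinalizeFlood {
    @Override
    protected void finalize() throws Throwable {
        Thread.sleep(1000); // "cannot be finalized quickly"
    }

    public static void main(String[] args) {
        for (long i = 0; i < 300000; i++) { // a real run would be unbounded
            new SlowFinalizeFlood(); // instantly garbage, but slow to finalize
            if (i % 100000 == 0) {
                // Compare this against the thread count shown in the
                // task manager while the test runs.
                System.out.println("allocated " + i
                        + ", live threads: " + Thread.activeCount());
            }
        }
    }
}
```

If the VM spawns extra finalizer threads to keep up (the WBS approach), the
reported thread count should climb; with a fixed per-CPU pool it should stay
flat while pending finalizable objects accumulate on the heap.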

-- 
Gregory

