harmony-dev mailing list archives

From "Geir Magnusson Jr." <g...@pobox.com>
Subject Re: [drlvm] finalizer design questions
Date Tue, 26 Dec 2006 23:58:31 GMT

On Dec 25, 2006, at 5:45 PM, Weldon Washburn wrote:

> On 12/24/06, Gregory Shimansky <gshimansky@gmail.com> wrote:
>>
>> On Sunday 24 December 2006 16:23 Weldon Washburn wrote:
>> > Very interesting.  Please tell me if the following is correct.  Without
>> > WBS, finalizing objects falls further and further behind because
>> > finalization thread(s) are unable to grab enough of the CPU(s) to keep up.
>> > Instead of increasing the priority of the finalization thread(s), WBS
>> > takes the approach of increasing the number of finalization threads.  The
>> > net effect is to increase the rate of finalization by diluting the OS
>> > ready queue.
>> >
>> > Does the following alternative design make sense?  Assume the OS/VM
>> > porting layer allows the VM to change an OS thread's priority
>> > appropriately.  During VM boot, query the OS to determine the number of
>> > CPUs in the box and create one finalizer thread for each CPU.  Never
>> > create additional finalizer threads.  Boost the priority of the finalizer
>> > threads above the level of Java app threads (but probably below real time
>> > priority.)  Note that all of this is orthogonal to "native" vs. "java"
>> > finalizer threads.
>>
>>
>> I like this approach. It probably covers all mentioned problems except for
>> the fundamental finalizers problem of long running (never ending)
>> finalize()
>
>
> To answer this question, I built a simple single-thread finalizer test.  It
> causes the JVM to call a finalize() method that does a never-ending
> CPU-intensive task.  Every 10 million iterations, this method prints out an
> iteration count.  While finalize() is running, main() executes a second copy
> of exactly the same workload.  main()'s print statement is slightly
> different so that it's easy to sort out the commingled output.
>
> I ran the above workload on a production JVM on a single-CPU laptop and
> observed the following.
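
(For illustration, the per-CPU finalizer idea quoted above might look roughly
like the Java-level sketch below.  This is hypothetical code, not drlvm's:
the worker body is a stub standing in for "drain the finalization queue", and
the exact priority value is only a guess at "above app threads, below
anything real-time".)

    // Hypothetical sketch only -- not drlvm code.
    public class PerCpuFinalizers {
        public static void main(String[] args) {
            int cpus = Runtime.getRuntime().availableProcessors();
            for (int i = 0; i < cpus; i++) {
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        // A real finalizer thread would loop here, pulling
                        // finalizable objects off the VM's queue and running
                        // their finalize() methods.
                        while (!Thread.currentThread().isInterrupted()) {
                            try {
                                Thread.sleep(100);
                            } catch (InterruptedException e) {
                                return;
                            }
                        }
                    }
                }, "Finalizer-" + i);
                // Above normal application threads, but well below
                // MAX_PRIORITY, so nothing resembling real-time priority.
                t.setPriority(Thread.NORM_PRIORITY + 2);
                t.setDaemon(true);
                t.start();
            }
        }
    }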

This is cool.  Did you run the test on Sun, BEA and IBM?  I'd be interested
to see how the different VMs behave.
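
(For reference, something like the following minimal sketch should reproduce
the scenario described above.  The class name, the 10-million constant, and
the output format are guesses, not the actual test code.)

    // Guessed reconstruction of the test described above, not the real code.
    public class FinalizeSpin {
        protected void finalize() {
            spin("finalize()");             // never returns
        }

        static void spin(String who) {
            long count = 0;
            while (true) {
                count++;
                if (count % 10000000 == 0) {
                    System.out.println(who + " iterations: " + count);
                }
            }
        }

        public static void main(String[] args) {
            new FinalizeSpin();   // immediately unreachable, has a finalizer
            System.gc();          // encourage the VM to queue it for finalization
            spin("main()");       // same workload, different print prefix
        }
    }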

geir

