db-derby-dev mailing list archives

From Daniel John Debrunner <...@apache.org>
Subject Re: Nulling out variables for GC
Date Fri, 23 Feb 2007 15:46:33 GMT
Knut Anders Hatlen wrote:
> djd@apache.org writes:
> 
>>  		if ((pageData == null) || (pageData.length != pageSize)) 
>>          {
>> +            // Give a chance for gc to release the old buffer
>> +            pageData = null; 
>>  			pageData = new byte[pageSize];
> 
> Out of curiosity (I have seen similar code changes go in before), why
> does pageData need to be set to null to be garbage collected? Is this
> a workaround for a bug on a certain JVM? If so, it would be good to
> document it in a comment.

So the idea is to allow the old value of pageData to be garbage collected 
before the allocation of the new array, rather than after.
Here's the thinking ...

Say on entry pageData is a reference to an 8k array, but the code needs 
a 16k array. With pageData = new byte[16384] I believe the semantics of 
Java require the non-atomic ordering to be:

      allocate 16k array
      set the field pageData to the newly allocated buffer.

That order requires that the code at some point holds a reference to 
both arrays, and thus the 8k array cannot be garbage collected until 
after the field is set. I believe this to be the case because if new 
byte[16384] throws OutOfMemoryError, then pageData must remain set to 
the old value (the 8k array).

So (in an extreme case) if the VM had only 10k of free memory, the 
allocation would fail; but if pageData is nulled out before the new, 
the free memory can jump to 18k and the allocation can succeed.
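As a minimal sketch of the pattern being discussed (the class, field, and 
method names here are hypothetical, not Derby's actual code), the idea is:

```java
public class PageBuffer {
    // Hypothetical field mirroring the pageData field in the patch.
    private byte[] pageData;

    void ensureCapacity(int pageSize) {
        if ((pageData == null) || (pageData.length != pageSize)) {
            // Drop the old reference first so the collector has a
            // chance to reclaim the old buffer before the new
            // allocation is attempted. Note: if the allocation then
            // throws OutOfMemoryError, pageData is left null rather
            // than pointing at the old array.
            pageData = null;
            pageData = new byte[pageSize];
        }
    }

    int currentSize() {
        return (pageData == null) ? 0 : pageData.length;
    }

    public static void main(String[] args) {
        PageBuffer buf = new PageBuffer();
        buf.ensureCapacity(8 * 1024);   // start with an 8k buffer
        buf.ensureCapacity(16 * 1024);  // grow to 16k, nulling first
        System.out.println(buf.currentSize());
    }
}
```

Note the trade-off visible in the sketch: nulling first changes the failure 
mode, since an OutOfMemoryError now leaves the field null instead of 
preserving the old buffer.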

So maybe this is incorrect thinking?
Do the JVMs have some special optimizations that make this unnecessary?

Dan.



