xmlgraphics-fop-dev mailing list archives

From Andreas L Delmelle <a_l.delme...@pandora.be>
Subject Re: Property Cache: Null Pointer Exception
Date Mon, 12 Nov 2007 18:10:20 GMT
On Nov 12, 2007, at 18:29, Chris Bowditch wrote:

> Andreas L Delmelle wrote:
>
> <snip/>
>
>>> A lot of the code that accesses the ref member variable has checks
>>> for null, and since it is a WeakReference, I assume the cause of
>>> this error is the Garbage Collector removing the reference. Adding
>>> a check for null in the rehash method seems to avoid the error, and
>>> all unit tests pass, but I would like confirmation from a
>>> Properties expert (Andreas?) that this is a valid fix before
>>> committing the change.
>> If your suggestion fixes the issue, by all means, commit. It should
>> not hurt to insert a check there.
>> I'll have a closer look at the underlying causes in the coming days.
>> Anyway, thanks for spotting this. The possibility of such issues
>> arising was precisely the reason I did not include it in the 0.94
>> release.
>
> Thanks for the prompt reply! I've now committed the check for null
> in the rehash method.

Good! I've just checked the code more closely, and I think your
suggestion is actually the only way to deal with it.
The reason for the NPE seems to be roughly:

- once the size of the hash-chains grows too large, a cleanup thread
(CacheCleaner) is launched, which nulls out the WeakReference if its
referent has been GC'ed, so non-synchronized get() attempts
immediately 'see' that the entry has been removed and get() needs to
try again with synchronization;
- if the size of the cleaned hash-chain is still too large, the map is
rehashed (see the sketch below).
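
For reference, a minimal sketch of that pattern. The names CacheEntry,
ref and CacheCleaner follow the description above, but the surrounding
structure is only illustrative, not the actual PropertyCache code:

import java.lang.ref.WeakReference;

class CacheEntry {
    // nulled out by the cleaner once the referent has been GC'ed
    volatile WeakReference<Object> ref;
    CacheEntry next;  // next entry in the same hash-chain

    CacheEntry(Object referent, CacheEntry next) {
        this.ref = new WeakReference<Object>(referent);
        this.next = next;
    }
}

// Launched when a hash-chain grows too large: marks stale entries as
// removed, so unsynchronized get() attempts see this immediately and
// retry under synchronization.
class CacheCleaner implements Runnable {
    private final CacheEntry head;

    CacheCleaner(CacheEntry head) {
        this.head = head;
    }

    public void run() {
        for (CacheEntry e = head; e != null; e = e.next) {
            WeakReference<Object> r = e.ref;
            if (r != null && r.get() == null) {
                e.ref = null;  // referent was GC'ed: entry is obsolete
            }
        }
    }
}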

Now, the rehash() method obtains locks on all hash-chains
recursively. This means it is theoretically possible that, before
rehash() obtains the lock on a given map-segment, another
cleaner-thread has already run over it and removed some obsolete
entries. That is why CacheEntry.ref returns null in that case.
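
In terms of the sketch above (reusing CacheEntry and the
WeakReference import), your committed fix amounts to something like
the following. Again, a hypothetical excerpt, not the actual rehash()
code:

class PropertyCacheSketch {
    // Re-inserts the live entries of one chain into the new table
    // (assumed to have a power-of-two length). The null check on
    // e.ref is the fix: a cleaner thread may have nulled it out
    // before rehash() obtained the lock on this segment.
    static void rehashChain(CacheEntry head, CacheEntry[] newTable) {
        for (CacheEntry e = head; e != null; e = e.next) {
            java.lang.ref.WeakReference<Object> r = e.ref;
            if (r == null) {
                continue;  // already removed by a CacheCleaner
            }
            Object referent = r.get();
            if (referent == null) {
                continue;  // referent GC'ed since the last cleanup
            }
            int idx = referent.hashCode() & (newTable.length - 1);
            newTable[idx] = new CacheEntry(referent, newTable[idx]);
        }
    }
}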

As to why this only happens after a few thousand documents: I'd
assume that by then the hash-chains have grown to a point where it
becomes more and more likely that there are requests to clean up
obsolete references, and those requests can interfere with a rehash()
that resulted from a previous, unsuccessful cleanup.


Cheers

Andreas

