james-server-dev mailing list archives

From Brian Wellington <bwell...@xbill.org>
Subject Re: Caching in DNSJAVA
Date Mon, 02 Oct 2006 19:23:47 GMT
On Sun, 1 Oct 2006, Noel J. Bergman wrote:

> Stefano wrote:
>> Noel J. Bergman wrote:
>>> I lowered maxcachesize to 5000 from the default of 50000
>>> If nothing else, the smaller cache should help eliminate
>>> that memory consumption from consideration.
>
>> Unfortunately I think that in current DNSServer implementation we
>> can only tune the "IN" cache and not SOA, PTR and other caches. So
>> they will be 50000 anyway.
>
>> I've not checked the code for this, but I'm almost sure I remember
>> this is how it works.
>
> dnsjava is consistently the largest user of memory in my JAMES heap, and
> does not appear to be entirely bounded, although I am having surprising
> difficulty keeping it running when heap dumps are enabled.
>
> Can you comment on the caching behavior inside of dnsjava?  We use dnsjava
> via:
>
> http://svn.apache.org/repos/asf/james/server/branches/v2.3/src/java/org/apache/james/dnsserver/DNSServer.java
>
> As you can see, we set up:
>
>   cache = new Cache (DClass.IN);
>   cache.setMaxEntries(maxCacheSize);
>   Lookup.setDefaultCache(cache, DClass.IN);
>
> On a relatively slow day, my server processes 100K+ connections, of which
> anywhere from 45%-70% might be blocked by a DNSRBL.

The caching algorithm in dnsjava is fairly simple.  Calling 
setMaxEntries() sets the maximum number of DNS nodes (names) in the 
cache; all information about individual records sharing a name is 
stored in one node.  The data structure is a LinkedHashMap with LRU 
semantics, so once the limit is reached the least recently used node 
is evicted and no more than that many nodes are retained.
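
For reference, here is a minimal sketch of that pattern, essentially 
what the LinkedHashMap javadoc shows (the BoundedCache name is just 
for illustration; dnsjava's real Cache keys nodes by name and stores 
the record sets inside each node, which this sketch omits):

  import java.util.LinkedHashMap;
  import java.util.Map;

  // Bounded LRU map: accessOrder=true keeps entries ordered by most
  // recent access, and removeEldestEntry() drops the least recently
  // used entry once the map grows past maxEntries.
  class BoundedCache<K, V> extends LinkedHashMap<K, V> {
      private final int maxEntries;

      BoundedCache(int maxEntries) {
          super(16, 0.75f, true);   // true = access order (LRU)
          this.maxEntries = maxEntries;
      }

      protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
          return size() > maxEntries;
      }
  }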

I can't think of any reason why the number of elements wouldn't be 
properly bounded; the use of the LinkedHashMap is trivial and copied from 
its documentation.  There's a Cache.getSize() method, which should tell 
you how many nodes are in use.  It calls the LinkedHashMap size function 
directly, so it should be accurate.
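
If you want to verify that from JAMES, something along these lines 
should work (an untested sketch using the same calls as 
DNSServer.java; the println is just illustrative):

  import org.xbill.DNS.Cache;
  import org.xbill.DNS.DClass;
  import org.xbill.DNS.Lookup;

  // Same setup as DNSServer.java, with the reduced bound.
  Cache cache = new Cache(DClass.IN);
  cache.setMaxEntries(5000);
  Lookup.setDefaultCache(cache, DClass.IN);

  // Check the node count periodically; it should never exceed the
  // value passed to setMaxEntries().
  System.out.println("dnsjava cache nodes: " + cache.getSize());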

If you can find a way to make the size exceed the maximum, that 
would be interesting, but if the problem is simply that maxCacheSize 
nodes take a lot of memory, that's not really fixable without 
changing the way nodes are stored.

Brian


