Message-ID: <3C39F4A0.6070104@users.sf.net>
Date: Mon, 07 Jan 2002 21:18:56 +0200
From: Antti Koivunen
To: avalon-dev@jakarta.apache.org (Avalon Developers List)
Subject: Cache Performance

As caching is used to improve application performance, the cache operations themselves should be as fast as possible. While I like the modular design of the proposed Excalibur cache, there are a few issues that could limit its performance.

1. The Cache interface is a bit too heavy

Most components use a cache simply as a place to put things and get them back from, so they have no need for cache listeners. Some might also want to decide for themselves whether an element is still valid, without having to encapsulate that logic in a CacheValidator. Would it make sense to limit Cache to just the essential methods, and extend it with a ValidatingCache that supports event listeners and pluggable CacheValidators? In fact, it would even be possible to define a ValidatingCacheProxy(Cache cache, CacheValidator validator) that adds this functionality to any cache implementation, without sacrificing performance when it isn't needed (rough sketch in the P.S. below).

2. Cache should probably not extend ThreadSafe

While 97% of all applications need a thread-safe cache, that isn't always the case. For example, in performance-sensitive event-driven architectures there might be a single worker thread using the cache, which really doesn't need the additional cost of acquiring and releasing object locks. Naturally, it would be possible to define a synchronizing proxy that turns any unsynchronized Cache into a thread-safe one (also sketched below).

3. Cache + CacheStore + ReplacementPolicy

I really like the clean separation of concerns. However, it does have some performance implications, since these components could otherwise be tightly integrated. This is of course just a minor implementation detail (nothing required by the Cache interface). BTW, probably the best way to implement LRUPolicy would be a custom linked list backed by a HashMap holding the entries for quick lookups (see the last sketch below).

4. Good work, that's all :)

All things considered, very nice OOD.

Thanks for reading my first post on this list. Hopefully it wasn't a complete waste of your time.

(: Anrie ;)
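
P.S. A few rough sketches of what I had in mind, in case they help. They are only illustrations: the Cache and CacheValidator types below are minimal, hypothetical versions (put/get/remove/containsKey and a single validate method), not the actual proposed Excalibur signatures.

First, the ValidatingCacheProxy idea from point 1: a plain wrapper that delegates to any Cache and consults a CacheValidator on lookups, so callers that don't need validation never pay for it.

// Hypothetical minimal interfaces, just to show the proxy idea;
// the real Excalibur signatures may well differ.
interface Cache {
    Object put(Object key, Object value);
    Object get(Object key);
    Object remove(Object key);
    boolean containsKey(Object key);
}

interface CacheValidator {
    /** Returns true if the cached entry may still be served. */
    boolean validate(Object key, Object value);
}

/**
 * Wraps any Cache and applies a CacheValidator on lookups.
 * Validation cost only appears when this proxy is used.
 */
public class ValidatingCacheProxy implements Cache {

    private final Cache cache;
    private final CacheValidator validator;

    public ValidatingCacheProxy(Cache cache, CacheValidator validator) {
        this.cache = cache;
        this.validator = validator;
    }

    public Object put(Object key, Object value) {
        return cache.put(key, value);
    }

    public Object get(Object key) {
        Object value = cache.get(key);
        if (value != null && !validator.validate(key, value)) {
            cache.remove(key);   // stale entry, drop it
            return null;
        }
        return value;
    }

    public Object remove(Object key) {
        return cache.remove(key);
    }

    public boolean containsKey(Object key) {
        return cache.containsKey(key);
    }
}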
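
Second, the synchronizing proxy from point 2, over the same hypothetical Cache interface: every call is serialized on a single lock, so a single-threaded user can simply skip the wrapper and avoid the locking cost entirely. (The class name is made up for the sketch.)

/**
 * Turns any unsynchronized Cache into a thread-safe one by
 * serializing all access through a single lock.
 */
public class SynchronizedCache implements Cache {

    private final Cache cache;
    private final Object lock = new Object();

    public SynchronizedCache(Cache cache) {
        this.cache = cache;
    }

    public Object put(Object key, Object value) {
        synchronized (lock) { return cache.put(key, value); }
    }

    public Object get(Object key) {
        synchronized (lock) { return cache.get(key); }
    }

    public Object remove(Object key) {
        synchronized (lock) { return cache.remove(key); }
    }

    public boolean containsKey(Object key) {
        synchronized (lock) { return cache.containsKey(key); }
    }
}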
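
Finally, the LRU bookkeeping from point 3: a custom doubly-linked list ordered by recency, backed by a HashMap from key to list node, so touching, removing and evicting entries are all O(1) instead of an O(n) list scan. Again, the class and method names here are invented for the sketch, not taken from the proposed LRUPolicy.

import java.util.HashMap;
import java.util.Map;

/**
 * LRU bookkeeping: linked list ordered by recency of use,
 * plus a HashMap from key to node for quick lookups.
 */
public class LruIndex {

    private static final class Node {
        final Object key;
        Node prev, next;
        Node(Object key) { this.key = key; }
    }

    private final Map nodes = new HashMap(); // key -> Node
    private Node head; // most recently used
    private Node tail; // least recently used

    /** Records that 'key' was just used (inserted or accessed). */
    public void hit(Object key) {
        Node node = (Node) nodes.get(key);
        if (node == null) {
            node = new Node(key);
            nodes.put(key, node);
        } else {
            unlink(node);
        }
        linkFirst(node);
    }

    /** Returns and forgets the least recently used key, or null if empty. */
    public Object evict() {
        if (tail == null) return null;
        Node node = tail;
        unlink(node);
        nodes.remove(node.key);
        return node.key;
    }

    /** Forgets 'key' without evicting (e.g. on explicit removal). */
    public void remove(Object key) {
        Node node = (Node) nodes.remove(key);
        if (node != null) unlink(node);
    }

    private void linkFirst(Node node) {
        node.prev = null;
        node.next = head;
        if (head != null) head.prev = node;
        head = node;
        if (tail == null) tail = node;
    }

    private void unlink(Node node) {
        if (node.prev != null) node.prev.next = node.next; else head = node.next;
        if (node.next != null) node.next.prev = node.prev; else tail = node.prev;
        node.prev = null;
        node.next = null;
    }
}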