commons-dev mailing list archives

From Maurizio Cucchiara <>
Subject Re: [ognl] internal cache performance improvement
Date Mon, 06 Jun 2011 16:13:55 GMT
Gary hit the nail on the head: considering that OGNL usually runs in a
multi-threaded environment like Struts, concurrency is a must.
At first glance LRUMap would seem the appropriate choice (it was
designed for exactly this purpose); unfortunately LRUMap is not thread
safe. We could certainly wrap it with Collections#synchronizedMap, but
that would always be a bottleneck.
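Just to make the wrapped approach concrete, here is a minimal sketch. It uses the JDK's access-ordered LinkedHashMap as a stand-in for commons-collections' LRUMap (the class name and capacity are mine, for illustration only):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class SynchronizedLruCacheSketch {
    // An access-ordered LinkedHashMap evicts the least recently used
    // entry once the capacity is exceeded, much like LRUMap. The
    // synchronizedMap wrapper makes it thread safe, but every get()
    // and put() contends on a single lock -- the bottleneck above.
    static <K, V> Map<K, V> newLruCache(final int capacity) {
        return Collections.synchronizedMap(
            new LinkedHashMap<K, V>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > capacity;
                }
            });
    }

    public static void main(String[] args) {
        Map<String, String> cache = newLruCache(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a" so "b" becomes the eldest entry
        cache.put("c", "3"); // evicts "b"
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```

Note that even reads go through the lock here, which is exactly why this does not scale under the kind of load Struts generates.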

On the other hand, ConcurrentHashMap seems the appropriate choice
(currently the synchronized keyword has 18 matches inside the
OgnlRuntime class).

We could even consider allowing the user to decide which
implementation to use.
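A rough sketch of what the ConcurrentHashMap-based cache could look like (the wrapper class and its methods are invented for illustration, not OGNL's actual API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical cache wrapper: reads never block, and the
// putIfAbsent race is benign as long as computing a value twice
// is merely wasted work (true for reflective metadata lookups).
public class ConcurrentCacheSketch<K, V> {
    private final Map<K, V> map = new ConcurrentHashMap<K, V>();

    public V get(K key) {
        return map.get(key); // no synchronized block needed
    }

    // Returns the winning value: the existing one if another thread
    // got there first, otherwise the value we just stored.
    public V putIfAbsent(K key, V value) {
        V existing = map.putIfAbsent(key, value);
        return existing != null ? existing : value;
    }
}
```

The point is that none of the 18 synchronized blocks would be needed on the read path, which is by far the most frequent operation in a cache.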

Since I have many complex Struts applications in production, I can run
a little test.

On 6 June 2011 16:55, Gary Gregory <> wrote:
> Does concurrency need to be taken into account for the cache? If so, you
> need to consider how access to the cache will be synchronized. An intrinsic
> lock? A ConcurrentHashMap? And so on.
> Gary
> On Mon, Jun 6, 2011 at 2:36 AM, Simone Tripodi <> wrote:
>> Hi all OGNL folks,
>> today's topic is the internal cache, which IMHO can be improved in
>> terms of performance. Its implementation is a multi-value map of
>> sorts, based on a fixed-size array: a function is applied to each key
>> to calculate the array index, and each array element is a Collection
>> of elements.
>> Even if getting the list of elements related to a generic key 'k' has
>> complexity O(1), which is fine, insert/search operations are not
>> the best because their complexity is O(m), where m is the size of the
>> list related to the key.
>> Here is my proposal: there's no need to reinvent the wheel, so the
>> array implementation can be replaced with the already provided
>> HashMap, where each map value is a simple implementation of a
>> balanced binary heap (AFAIK commons-collections already provides an
>> implementation), which allows us to reduce insert/search complexity
>> to O(log m).
>> WDYT? Is this worthwhile, or trivial added value? I know that the
>> cache dimension is relatively small, but linear search sounds too
>> basic, doesn't it?
>> Looking forward to your feedback, have a nice day,
>> Simo
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail:
>> For additional commands, e-mail:
> --
> Thank you,
> Gary
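To illustrate Simone's two-level idea, a rough sketch follows. All names are invented, and one caveat: to get O(log m) for search as well as insert, the per-bucket structure needs to be a balanced search tree such as TreeMap rather than a binary heap, since a heap only bounds insert at O(log m):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Sketch of the proposed two-level cache: an outer HashMap keyed by
// 'k' (O(1) bucket lookup) whose values are sorted maps, so insert
// and search within a bucket cost O(log m) instead of the O(m)
// linear scan of a plain Collection.
public class TwoLevelCacheSketch {
    private final Map<String, TreeMap<String, Object>> buckets =
        new HashMap<String, TreeMap<String, Object>>();

    public void put(String key, String subKey, Object value) {
        TreeMap<String, Object> bucket = buckets.get(key);
        if (bucket == null) {
            bucket = new TreeMap<String, Object>();
            buckets.put(key, bucket);
        }
        bucket.put(subKey, value); // O(log m) insert into the bucket
    }

    public Object get(String key, String subKey) {
        TreeMap<String, Object> bucket = buckets.get(key);
        return bucket == null ? null : bucket.get(subKey); // O(log m)
    }
}
```

This sketch is single-threaded; combining it with the concurrency discussion above would mean swapping the outer HashMap for a ConcurrentHashMap.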

