jakarta-taglibs-dev mailing list archives

From "Daryl Beattie" <dar...@date.com>
Subject RE: Memory Leak in ELEvaluator.java (standard v1.0.6)
Date Wed, 20 Oct 2004 20:01:16 GMT
Yes, that's a good suggestion. I had never thought of that.

I also agree that the options have to be weighed carefully when
discussing caching strategies -- heck, even testing them is difficult and
a lot of work.

By the way, with regard to testing/profiling whatever fix we use, I'd
gladly volunteer to help. My app may not be a perfect example of
"normal" usage, due to its requirement for low GC pauses, but it does
create many dynamic expressions that can be used to measure cache
performance.

My main concern with the init param is that it will be non-obvious to most
users, who expect to just drop in a jar and have it "work". I was very
surprised when I found that the cache was causing my memory usage to
skyrocket, and then surprised again when I found out how processor-intensive
it was to bypass the cache. After all, I'm just using tags normally, right?
My personal feeling is that if a somewhat complicated caching mechanism
is required to make the taglib work well out-of-the-box for almost all users,
then it is better to go with the complicated solution.
Then again, I might have a very different opinion from everybody else.
There's also the fact that this is legacy code, and people might prefer
the simplest caching route because it carries lower risk.

On another note, I have some interesting statistics for you guys. I have
been doing some load testing on the standard-taglibs ELEvaluator cache
to gather some statistics on its usage before I choose a max size for my
quick implementation of a LinkedHashMap cache (a rough sketch of which is
below). What I found completely surprised me (more so than the surprises
I mentioned earlier). My test has about 220 users performing searches
that use tags to help display the results; some of the expressions given
to those tags are dynamically created, most are not. What I found was
that the cache size kept increasing significantly, even though I
half-expected it to level off. This is likely because expressions are
being dynamically created with different data each time (probably because
a new user is being used for each search). What surprised me even more
was that the cache hit rate was 99% throughout the life of the cache.
This could mean several things, ranging in a spectrum from "each item
added is again requested 99 times and then never requested again" to
"one item is being requested 99 times for each new item placed in the
cache". Any of these scenarios seemed very unlikely, to the point where
I thought I had accidentally reversed the percentage value of my cache
hit-rate calculation [which is hits / (hits + misses)]. Unfortunately
that wasn't the case, and on the hunch that the hit rate was 99% because
many expressions were being evaluated repeatedly, I re-ran the test,
this time counting the cache calls... What I found was that:

	After only 10 minutes of moderate load (~75 users; not heavy load), I had:

- ~7500 entries in the String cache,
- a consistent hit rate of 99% on the String cache, and
- over 1 MILLION calls to the String cache (i.e. the parseExpressionString(String) method).

	After half an hour, the numbers were staggering. I have included
the graphs for your viewing pleasure. The high hit rate would explain
why my processor choked when I removed the cache altogether, and the high
number of entries over such a short period of time would explain why my
app was leaking.
	Interesting numbers....
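
	In case it helps clarify what I mean by "a LinkedHashMap cache",
here is a rough sketch of the sort of thing I had in mind (class and field
names are illustrative only, and note that LinkedHashMap needs J2SE 1.4,
so this is what I'm testing in my own app rather than a proposal for the
taglib itself):

import java.util.LinkedHashMap;
import java.util.Map;

// Rough LRU cache on top of LinkedHashMap's access-order mode (J2SE 1.4+).
// Once the map grows past maxSize, the least-recently-accessed entry is dropped.
public class ExpressionCache {

    private final Map cache;
    private long hits;
    private long misses;

    public ExpressionCache(final int maxSize) {
        // accessOrder = true: iteration order is least-recently-accessed first
        this.cache = new LinkedHashMap(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry eldest) {
                return size() > maxSize;  // evict the eldest entry once we pass maxSize
            }
        };
    }

    public synchronized Object get(String expression) {
        Object parsed = cache.get(expression);
        if (parsed != null) {
            hits++;
        } else {
            misses++;
        }
        return parsed;
    }

    public synchronized void put(String expression, Object parsed) {
        cache.put(expression, parsed);
    }

    // The hit rate I'm measuring above: hits / (hits + misses).
    public synchronized double hitRate() {
        long total = hits + misses;
        return (total == 0) ? 0.0 : (double) hits / total;
    }
}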

- Daryl.


-----Original Message-----
From: Kris Schneider [mailto:kris@dotech.com] 
Sent: Wednesday, October 20, 2004 2:52 PM
To: Tag Libraries Developers List
Subject: RE: Memory Leak in ELEvaluator.java (standard v1.0.6)


Of course it helps. By the way, you wouldn't have to *only* use soft
references. I dimly recall implementing a cache that combined strong
references for the entries that *had* to be in the cache (according to
some strategy) with soft references for the other entries. I suppose you
could also define an upper bound on the number of additional "soft" cache
entries. Sort of a two-level cache. That approach may not meet your needs
either, but I just wanted to point out the possibility.
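
Something along these lines, just as a sketch (the names are made up, and
it leaves out the upper bound on the number of soft entries):

import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Rough two-level cache: a bounded "hot" map of strong references plus an
// overflow map of SoftReferences that the garbage collector is free to clear.
public class TwoLevelCache {

    private final int hotLimit;
    private final Map hot = new HashMap();   // String -> Object, strongly held
    private final Map soft = new HashMap();  // String -> SoftReference

    public TwoLevelCache(int hotLimit) {
        this.hotLimit = hotLimit;
    }

    public synchronized Object get(String key) {
        Object value = hot.get(key);
        if (value != null) {
            return value;
        }
        SoftReference ref = (SoftReference) soft.get(key);
        if (ref != null) {
            value = ref.get();
            if (value == null) {
                soft.remove(key);  // the collector cleared it; drop the stale entry
            }
        }
        return value;
    }

    public synchronized void put(String key, Object value) {
        if (hot.size() < hotLimit) {
            hot.put(key, value);  // entries we insist on keeping
        } else {
            soft.put(key, new SoftReference(value));  // reclaimable under memory pressure
        }
    }
}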

When we start talking about having a more full-featured cache
("self-balancing"), we're moving beyond the scope of just grabbing a
class from Collections and renaming it. That's not necessarily bad; we
just have to weigh the options.

I'm not sure that using a servlet context init param is magical or
non-obvious, although it still might be ;-). It's the same mechanism
that can be used to perform "static" configuration of certain JSTL
features (dynamic configuration would be the domain of the
javax.servlet.jsp.jstl.core.Config class). However, I agree we need to
try to pick a default that's generally useful, but as you say, that's
pretty difficult.
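
For illustration, picking up such a param could look something like this
(the class name and fallback default are made up; the param name is the
one proposed below):

import javax.servlet.ServletContext;

// Illustrative only: read the proposed context init param, falling back to a
// default when it's absent or malformed; a value of 0 would disable caching.
public final class ExprCacheSizeConfig {

    public static final String PARAM_NAME =
        "org.apache.taglibs.standard.lang.jstl.exprCacheSize";
    public static final int DEFAULT_SIZE = 100;  // placeholder, not a recommendation

    private ExprCacheSizeConfig() {
    }

    public static int getCacheSize(ServletContext context) {
        String value = context.getInitParameter(PARAM_NAME);
        if (value == null) {
            return DEFAULT_SIZE;
        }
        try {
            int size = Integer.parseInt(value.trim());
            return (size < 0) ? DEFAULT_SIZE : size;
        } catch (NumberFormatException e) {
            return DEFAULT_SIZE;
        }
    }
}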

Quoting Daryl Beattie <darylb@date.com>:

> I thought about a SoftReference Map... But I personally would like to 
> keep a handle on the size of cache instead of having it eat up all my 
> memory and then forcing a garbage collection to remove soft 
> references. My app, for example, has to be FAST, which means pause 
> times for GC activity have to be very low. I use the low-pause 
> collector for that reason. But when I tested with SoftReference Map 
> caches, I found that all my memory was gone and then a huge full GC 
> took place that paused the VM for a considerable length of time. Since
> then I've not used SoftReference Maps because I'm scared of the memory
> usage.
> 
> On the other hand, finding the right default value for the max cache 
> size is not easy. I'm noticing that an exprCacheSize of 100 is way too 
> small; just one user on my system produces 400 expressions in no time.
> If the cache were limited to 100 in size by default, my processor 
> would still be maxed out.
> 
> Naturally the size of the cache would vary from application to 
> application... Would it be possible to have a self-balancing cache? 
> One that would grow to a size where the hit-rate would be steady at 
> about 70%? Then again, the hit-rate would depend upon the variety of 
> expressions being evaluated, which again depends upon how expressions 
> are used in the application.
> 
> WeakHashMaps won't help us, since the expression Strings are 
> created/destroyed by every tag... As for more solution ideas, I don't
> have any right now. I'll give it some more thought.
> 
> Also, do we want to worry about the fact that users of the standard 
> taglib have to set some magical (i.e. non-obvious) parameter that will
> make the difference between their system being slow due to 
> re-evaluating expressions or taking up huge amounts of memory? They 
> might not expect such a thing... It would be nicer if it was somewhat 
> self-managing, or at least set to a default value that works for 90% 
> of the users. (Maybe I am not in that 90%...)
> 
> I don't know if this info helps... You did ask for my "thoughts". :)
> 
> - Daryl.
> 
> 
> -----Original Message-----
> From: Kris Schneider [mailto:kris@dotech.com]
> Sent: Wednesday, October 20, 2004 1:56 PM
> To: Tag Libraries Developers List
> Subject: Re: Memory Leak in ELEvaluator.java (standard v1.0.6)
> 
> 
> Heh, my reading comprehension skills are failing. JSP 1.2 requires 
> J2SE 1.2 and JSP 2.0 requires J2SE 1.3.
> 
> Just to clarify Option #2, we're talking about taking the source code 
> for org.apache.commons.collections.map.LRUMap, changing its package, 
> and resolving all its dependencies by doing the same thing with the 
> source code for those classes and interfaces, right? If so, that seems
> reasonable. It would be nice to be able to script that somehow...
> 
> In addition to the use of LRUMap, did anyone want to offer an opinion 
> on exposing a mechanism to configure the cache size (setting cache 
> size = 0 would effectively bypass caching)?  Personally, I think it's 
> a good idea. In keeping with the mechanics of the Config class, how 
> about defining a servlet context init param? Something like:
> 
> <context-param>
>   <param-name>org.apache.taglibs.standard.lang.jstl.exprCacheSize</param-name>
>   <param-value>100</param-value>
> </context-param>
> 
> Any thoughts on using different caching strategies? How about 
> leveraging reference objects (SoftReference might be appropriate)?
> 
> Quoting Justyna Horwat <Justyna.Horwat@Sun.COM>:
> 
> > After sending my original mail yesterday I had looked at the JSP 2.0
> > requirements and you're right, they don't require J2SE 1.4.
> > 
> > Out of the options I like Option #2 the best as well for the same 
> > reasons as Daryl and Felipe mentioned. I looked at the Collections 
> > source and they list the JDK dependency as JDK 1.2 or later.
> > 
> > If support of the Collections classes becomes an issue we can 
> > revisit this decision and always decide to add the jar dependency in
> > the future.
> > 
> > Unless there are any objections, I'm going to go ahead with Option 
> > #2 and add the Collections LRUMap classes and dependencies into both
> > JSTL 1.0 and 1.1.
> > 
> > Thanks,
> > 
> > Justyna
> > 
> > Felipe Leme wrote:
> > 
> > > On Wed, 2004-10-20 at 01:04, Kris Schneider wrote:
> > > 
> > >>I think I jumped to the conclusion that Daryl was using JSTL 1.1 
> > >>and hence made the JSP 2.0 -> J2EE 1.4 -> J2SE 1.4 connections.
> > > 
> > > 
> > > And even JSP 2.0 doesn't require J2SE 1.4 when running inside a
> > > 'standalone' web container (i.e., outside a J2EE 1.4 container).
> > > 
> > > 
> > >>required to support J2SE 1.3. I'm not sure I like taking on the 
> > >>dependency, but the Collections project already contains the 
> > >>classes
> > > 
> > > 
> > > I thought about the Collections too, but then Standard would be
> > > composed of 3 jars, which would certainly cause a lot of trouble, as
> > > people are used to only copying jstl.jar and standard.jar. We have
> > > alternatives, too:
> > > 
> > > - merge commons-collections.jar into standard.jar
> > > - replicate the necessary classes into Standard src code
> > > 
> > > 
> > > 
> > >>org.apache.commons.collections.LRUMap (v.2.1.1) and 
> > >>org.apache.commons.collections.map.LRUMap (v.3.1). I can't seem to
> > >>put a finger on the J2SE requirements for Collections though...
> > > 
> > > 
> > > 
> > > Assuming these classes don't have deep dependencies on others, I
> > > would say the second option would be better (in the worst case, we
> > > would do some minor changes in the classes, like removing calls to
> > > Commons Logging, if any).
> > > 
> > > -- Felipe
> 
> --
> Kris Schneider <mailto:kris@dotech.com>
> D.O.Tech       <http://www.dotech.com/>

-- 
Kris Schneider <mailto:kris@dotech.com>
D.O.Tech       <http://www.dotech.com/>


