struts-user mailing list archives

From "Shannon, Andrew" <>
Subject RE: Struts 2 perfromance among the worst???
Date Wed, 23 Jul 2008 13:59:01 GMT
Performance testing is part art and part science.  While this study
appears to have had some credible effort put into it, I have seen how
this kind of study can get people really worked up.  For me there is
still some vital information missing from it, which tells me the study
is incomplete and therefore not conclusive.

We went through an extensive performance evaluation at my last company
when using Tapestry, EJBs and JDX (a commercial ORM framework akin to
Hibernate).  People unequivocally challenged the frameworks involved,
especially since load was a real issue.

In the end we found that the frameworks themselves were not the problem,
but rather how people were using them.  Inevitably it came down to how
developers were accessing the database and how pages were developed
around database-driven features.

We successfully used Grinder to generate load numbers that gave us
enough information on latency to make us take additional steps.  This
study seems to have taken only the first step: generating response time
numbers and presenting them as conclusive evidence.  In my opinion that
is not sufficient for final conclusions.

The study has not provided any information on which tiers or components
of the applications caused the latency.  The only way to get that is to
perform additional profiling of the database (even though they're using
in-memory data, it could still be a factor) and of the application.
Who's to say that the applications used for this study were "well
written"?  We cannot assume that they were well written or not, and that
in and of itself is a problem for accepting the results as is.

It's very difficult to write "similar" applications using different
frameworks and expect to be able to compare them with any accuracy.
This is an important point to remember, because even their "common
extensible design" should be suspected as an x-factor in the overall
results: they first acknowledge that an apples-to-apples comparison is
difficult, but then use this common application to mitigate that
difficulty.  The study's benchmark disclaimers make some of these
points, so the author is aware of these things, but a manager, for
example, who gets his hands on this study may not understand such things
when looking at the bottom line.

I also didn't see anything with regard to priming the application
before launching the load test.  As you know, there are several things a
freshly started application needs to do before it runs without the
overhead of JSP compilation, loading memory caches, etc.  This factor in
and of itself can greatly skew results.
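To make the warm-up point concrete, here is a minimal sketch of how a load script can separate a priming phase from the measured phase.  This is not a real Grinder script; `run_load_phase` and `request_fn` are hypothetical names, and the idea is simply that one-time startup costs (JSP compilation, cache population, class loading) are paid before any timing is recorded.

```python
import time

def run_load_phase(request_fn, warmup_iterations, measured_iterations):
    """Call request_fn repeatedly; discard warm-up timings, return measured ones."""
    # Warm-up phase: JSP compilation, class loading, and cache population
    # all happen here, so their cost never reaches the reported numbers.
    for _ in range(warmup_iterations):
        request_fn()

    # Measured phase: record per-request latency in milliseconds.
    timings = []
    for _ in range(measured_iterations):
        start = time.perf_counter()
        request_fn()
        timings.append((time.perf_counter() - start) * 1000.0)
    return timings
```

A study that skips the warm-up phase folds those one-time costs into its averages, which is exactly the skew described above.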

Nevertheless, there is value to be had from this study, although I would
be more satisfied if the study had taken the next step of doing some
application profiling to further demonstrate the why of its findings.

Just as an example of what we had to do to take the next step beyond a
study like this, based on our Grinder results we used the free version
of the Mercury J2EE Profiler to do application profiling.  It was used
on a developer's workstation to profile up to 5 virtual users and slice
up the various tiers of the application via cut points.  This strategy
was right on the money.  We immediately saw that 90+ percent of the
latency was occurring at our data layer because the app was crushing the
database.  Database profiling supported this evidence.  Even though we
had written some of the web tier poorly, the framework still rocked.
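The cut-point idea above can be sketched in a few lines, independent of any particular profiler.  This is a toy illustration, not the Mercury tool's API: `cut_point` and `latency_share` are names I made up.  The point is just that wrapping each tier boundary in a timer lets you attribute total latency to web, service, or data layers.

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# Accumulated wall-clock seconds per named tier.
tier_totals = defaultdict(float)

@contextmanager
def cut_point(tier):
    """Time everything executed inside the with-block and charge it to a tier."""
    start = time.perf_counter()
    try:
        yield
    finally:
        tier_totals[tier] += time.perf_counter() - start

def latency_share(tier):
    """Fraction of total observed latency attributed to one tier."""
    total = sum(tier_totals.values())
    return tier_totals[tier] / total if total else 0.0
```

With cut points around the data-access calls, a result like "90+ percent of latency is in the data layer" falls straight out of `latency_share("data")`, which is far more actionable than a single end-to-end response time.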

The bosses had concluded, before we achieved these results, that more
hardware was the solution.  In the end we were able to prove that
writing more efficient code, adding some database indexes, and
streamlining our use of the database made the polished product run
better on a single server than throwing more hardware at it in a
clustered environment.

These frameworks are written by people far smarter than myself, and I'd
like to think that they've done due diligence on the performance of the
framework itself.  My experience shows me that these frameworks are
typically performant enough that the framework is never my first
suspect when load becomes an issue.

Overall I actually liked some of what I saw about the Struts 2 numbers
in this study.  It doesn't appear as bad as the study's ranking
suggests, so I'm not concerned based on these results alone.  There's
still more the author should include in the Future Work section, for
example doing work to tune the application code.  I think they would
learn a great deal about the framework's performance by working through
these exercises and looking at Amdahl's Law.
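Amdahl's Law is worth a quick worked example here, because it explains why tuning the framework tier may barely move a benchmark.  The formula below is the standard one; the 90% data-layer figure is taken from the experience described earlier in this message, not from the study.

```python
def amdahl_speedup(improved_fraction, improvement_factor):
    """Overall speedup when only a fraction of the work is sped up.

    improved_fraction: share of total time spent in the part being improved.
    improvement_factor: how many times faster that part becomes.
    """
    return 1.0 / ((1.0 - improved_fraction)
                  + improved_fraction / improvement_factor)

# If 90% of latency is in the data layer and we make it 10x faster,
# the whole application gets roughly 5.3x faster.
print(amdahl_speedup(0.9, 10))

# Even an infinitely fast data layer caps overall speedup at 1/(1-0.9) = 10x,
# so the remaining 10% (framework, web tier) barely matters until then.
print(amdahl_speedup(0.9, 1e12))
```

The flip side is the same arithmetic: if the framework accounts for only 10% of total latency, even a framework that were twice as fast would improve end-to-end response time by about 5%, which is why framework-only benchmark rankings can mislead.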

We're just ramping up to load test a struts2-spring-hibernate
application.  I'm very curious to see how it goes.


-----Original Message-----
From: Musachy Barroso [] 
Sent: Tuesday, July 22, 2008 11:39 PM
To: Struts Users Mailing List
Subject: Re: Struts 2 perfromance among the worst???

I am sorry, but I will have to quote a movie on this one:

Kevin Lomax: In the Bible you lose. We're destined to lose dad.
John Milton: Well consider the source son.

The Devil's Advocate ;)

On Tue, Jul 22, 2008 at 9:36 PM, neerashish <>
> any response from struts2 people?
> --
> View this message in context:
> Sent from the Struts - User mailing list archive at
> ---------------------------------------------------------------------
> To unsubscribe, e-mail:
> For additional commands, e-mail:

"Hey you! Would you help me to carry the stone?" Pink Floyd

