jmeter-user mailing list archives

From "Badeau, Kevin (DPH)" <>
Subject RE: Variances between automated and manual tests
Date Fri, 26 Jun 2009 13:25:22 GMT
I appreciate the responses to this.

I wanted to close the loop so here is a summary of what I found.

After closer evaluation of the results, it appears some folks were inaccurately interpreting
the data.

Our test simulates 500 concurrently active users (i.e. users actually doing something), and
we're looking for approximately a 5-second response to each of their requests.

Our tester initiated the test from a single desktop simulating 500 users.

All 500 users manage to log in within the first minute or so, but it is taking about
half an hour for each of the 500 users to complete what they are assigned to do. Given
the number of steps we want each virtual user to perform, we should expect the test
to complete within a few (2-3) minutes for all 500 users.

Our tester (a developer wearing the unfamiliar hat of QA) took the 30-minute
total test time and divided it by 500 virtual users to get a result of ~3.6 seconds per user.
In other words, he was treating the run as a sequential batch process.
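The two readings of the same 30-minute figure can be made concrete. Here is a small Python sketch contrasting the sequential-batch division with the concurrent interpretation; the times and user count come from this thread, but the per-user step count is an illustrative assumption, not a measured value:

```python
# All figures in seconds, taken from the thread except where noted.
total_test_time = 30 * 60   # the whole 500-user test took ~30 minutes
users = 500

# Mistaken reading: treat the run as a sequential batch and divide.
per_user_sequential = total_test_time / users
print(per_user_sequential)        # 3.6 "seconds per user" -- looks great, but wrong

# Concurrent reading: all 500 users run at once, so each user's
# wall-clock time to finish their script is roughly the whole run.
per_user_concurrent = total_test_time
print(per_user_concurrent / 60)   # ~30 minutes per user

# What we actually hoped for: assuming ~30 steps per user at the
# ~5 s response target (the step count is an assumption), each
# concurrent user should finish in a few minutes.
steps_per_user = 30               # illustrative assumption
target_response = 5
expected_per_user = steps_per_user * target_response
print(expected_per_user / 60)     # 2.5 minutes, in line with the 2-3 minute expectation
```

The key point is that under concurrency, dividing total wall-clock time by the number of users tells you nothing about what any individual user experienced.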

I was under the impression his numbers were based on the average of the actual measured
response times for each step of the test.

So in reality, the worst case is a virtual user who took half an hour to complete what we
hoped the system would let them finish in a couple of minutes, or ideally less.

So it seems the significantly poor performance/unresponsiveness we see manually during an
automated 500-concurrent-user test is the same poor performance most of our virtual users
are experiencing as well.

So that explains that...

We did consider that our jMeter machine might be a bottleneck when simulating 500 users
instantaneously (i.e. with no ramp-up).

To determine to what degree it is a bottleneck, we ran a different kind of test: two desktops
driving 100-user tests simultaneously, compared against a 200-user test from a single desktop.

The single-node jMeter machine test completed in about 10 minutes.
	One machine simulating 200 users.

The dual-node jMeter machine test completed in about 20 minutes.
	Two machines simulating 100 users each.

The difference in test times appears to be because the two jMeter client machines together
were twice as responsive, applying twice the load to the servers under test; as a result,
the overall completion time of the two-node test doubled.

This indicates to me that the jMeter machine itself is a bottleneck that needs to be factored
into test results; its impact should be understood and reduced as much as possible.
Reducing the jMeter bottleneck will likely open the floodgates and allow more concurrent
load to be applied to the environment you are testing.
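When a single JMeter client is the bottleneck, the usual remedy is to spread load generation across several machines using JMeter's remote (distributed) mode. A minimal sketch using JMeter's standard non-GUI options; the hostnames and file names here are placeholders, not from this thread:

```shell
# Start jmeter-server on each load-generator machine first. Then, from the
# controller, run the plan in non-GUI mode (-n) against the remote hosts (-R),
# writing results to a .jtl file (-l). Hostnames and paths are hypothetical.
jmeter -n -t capacity_test.jmx \
       -R 10.0.0.11,10.0.0.12 \
       -l results.jtl
```

Each remote host runs the full thread count in the plan, so a 500-user target split across five generators would use a plan configured for 100 threads.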

Hope this helps.


-----Original Message-----
From: []
On Behalf Of Deepak Shetty
Sent: Tuesday, June 23, 2009 12:10 PM
To: JMeter Users List
Subject: Re: Variances between automated and manual tests

No rule of thumb, I just watch the perf counters and see that the CPU load
is light to moderate and that memory doesn't exceed the physical RAM on the
On Tue, Jun 23, 2009 at 9:02 AM, Scott McFadden

> What is the recommended maximum number of simulated jmeter users / threads
> per machine?  Assuming Windows multi core environments.
> -----Original Message-----
> From: Deepak Shetty []
> Sent: Tuesday, June 23, 2009 10:57 AM
> To: JMeter Users List
> Subject: Re: Variances between automated and manual tests
> hi
> Some things to think about
> a. Are you including all resources in your jmeter tests (e.g. all embedded
> resources like css/images/javascript which may or may not be cached by the
> browser).
> b. Do you have assertions for all your tests that validate that your
> response is as expected (no error messages , expected text etc?)
> c. Have you suitably parameterised your jmeter tests (say, e.g., using
> different users, so that if your application is caching some bits per user,
> then you don't get better results than you should)
> d. Have you distributed your tests across multiple machines correctly (500
> concurrent users won't normally be supported by a single client machine)?
> regards
> deepak
> On Tue, Jun 23, 2009 at 8:04 AM, Badeau, Kevin (DPH) <
>> wrote:
> > Hello folks,
> >
> >
> >
> > We are using jMeter to capacity test an application we are considering
> > purchasing.
> >
> >
> >
> > When we ramp up to 500 concurrent users jMeter is reporting response
> times
> > under 5 seconds and it appears it is stepping through all the
> functionality
> > we are asking it to do.
> >
> >
> >
> > This is very acceptable for us.
> >
> >
> >
> > However, while the test is running we try to hit the application manually
> > and we find it is unresponsive.
> >
> >
> >
> > Manual testing is quick outside of a concurrent jMeter test running.
> >
> > Manual testing performance degrades as we move into the range of 100 to 200
> > concurrent users.
> >
> > Manual testing is unresponsive when we run 500 concurrent users.
> >
> >
> >
> > jMeter reports response times only degrade by about a ½ second for each
> > level of concurrent users we try.
> >
> >
> >
> > There seems to be some wide performance variance from what jMeter is
> seeing
> > vs. what we see manually.
> >
> >
> >
> > I'm wondering if anyone has any general suggestions as to why this might
> be
> > or how we might go about isolating this anomaly.
> >
> >
> >
> > I can provide more specific details if needed but I feel the question is
> > pretty basic in terms what we understand the tool is supposed to be
> > accomplishing in simulating real world scenarios and benchmarking them.
> >
> >
> >
> > Thanks in advance.
> >
> >
> >
> > Kevin
> >
> >
> ---------------------------------------------------------------------
> To unsubscribe, e-mail:
> For additional commands, e-mail:

