jmeter-user mailing list archives

From Kirk <kirk.pepperd...@gmail.com>
Subject Re: determining ramp-up period
Date Thu, 23 Jun 2011 23:06:54 GMT
Hi Barrie,


On Jun 24, 2011, at 12:34 AM, Barrie Treloar wrote:

> On Fri, Jun 24, 2011 at 2:50 AM, Kirk <kirk.pepperdine@gmail.com> wrote:
>> If I'm expecting an incoming tx rate of 200 requests per second and JMeter doesn't
>> have the threads to sustain it, then I would consider JMeter to be a bottleneck in the test.
>> This is because the artificial throttling of the overall system (thread starvation in JMeter)
>> can result in a load pattern that doesn't properly expose an underlying bottleneck. This is
>> what I've run into in a couple of accounts. The problem in these cases is that developers are
>> looking into the application and not seeing where the real problem is.
> 
> I'm newish to the list, so I haven't seen this discussion before.
> Can you elaborate some more about why the Thread starvation occurs?

Starvation occurs when JMeter cannot launch a new request to maintain a desired transaction
rate because the thread that it would use to do so is tied up in the server. This is because
the looping thread architecture in the tool causes JMeter to act as a closed
system. In a closed system, the transaction rate is controlled by the rate of exit from the system
(the server, in this case). For example, think of how you'd model a call center. The rate at which
incoming calls are handled is throttled by the rate at which operators can clear calls. In other
words, this is a self-throttling system. If we are trying to model an open system (most systems I
test), then the rate of incoming requests should not be tied to the rate at which requests leave the system.
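To make the closed-vs-open distinction concrete, here's a rough back-of-the-envelope sketch (plain Java arithmetic of my own, not JMeter code or its internals) showing how offered load collapses in a closed loop when the server slows down, while an open system keeps applying the same arrival rate:

```java
public class OpenVsClosed {
    // Closed system: the next request only starts after the previous one
    // returns, so a slow server throttles the offered load.
    static int closedLoopRequests(long serverMillisPerCall, long testMillis) {
        long sent = 0, clock = 0;
        while (clock + serverMillisPerCall <= testMillis) {
            clock += serverMillisPerCall;  // caller is blocked inside the server
            sent++;
        }
        return (int) sent;
    }

    // Open system: arrivals follow their own schedule (e.g. one request
    // every 5 ms = 200 req/s), independent of server response time.
    static int openLoopRequests(long interArrivalMillis, long testMillis) {
        return (int) (testMillis / interArrivalMillis);
    }

    public static void main(String[] args) {
        // Server degrades to 50 ms/call; the target rate was 200 req/s.
        System.out.println("closed: " + closedLoopRequests(50, 1000) + " req/s"); // collapses to 20
        System.out.println("open:   " + openLoopRequests(5, 1000) + " req/s");    // still 200 offered
    }
}
```

The point of the toy numbers: a looping-thread (closed) generator silently backs off to 20 req/s here, which is exactly the "thread starvation" masking of the real bottleneck described above.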
> 
>> The other issue is that it's hard to set up a JMeter script so that it sustains an
>> expected workload on a server. This is why I've suggested, taught, demonstrated and continue
>> to use ThreadGroup in a way that you yourself called "bizarre". Yet using that model I'm able
>> to simulate thousands of users in a single JMeter instance all doing the right thing (or as
>> much of a right thing as the current HTTP samplers allow for). And yup, I've got a ThreadGroup
>> replacement sketched out on my whiteboard; now to find some cycles to make it real. I think
>> it should eliminate the need for the Constant Throughput Timer (but, who knows ;-)).
> 
> And same with this one, what do you do differently with ThreadGroups and why?

To model an open system with JMeter, I calculate the number of business transactions I want
to execute in a given amount of time. I set the number of users in the thread group to that
number. The ramp-up time is set to the total test time and iterations is set to 1. Threads
start on the ramp-up schedule, but once they finish the transaction defined in the thread
group, they die. So the rate at which threads enter the system is total number of threads /
ramp-up time. This is a problem because it releases threads on a pulse, which is unnatural (in
most systems), so to counter that I add a random pause at the top of the thread group. The
duration is 0 to (3x the average desired inter-request arrival time) ms. This value seems to be
working reasonably well. It represents a time spread that is about 3 standard deviations on a
normal curve. Inter-request arrival times are generally modeled using a Poisson distribution,
but a flat random set up this way has worked reasonably well.

In my demo app where I demonstrate adaptive sizing (Java memory pools), I run 750 threads
through in 150 seconds (ok, too short for a serious bench, but this is a demo and I need to run
it several times, so...). Without adaptive sizing I see steady state with about 380 active
threads. With adaptive sizing turned on, I see less than half of that value at steady state. I
wouldn't begin to know how to configure JMeter to provide a load that created the same problem
using a traditional thread group configuration. As soon as GC kicks in, response times jump,
which results in JMeter applying less load, which in turn causes response times to drop as GC
catches up and... well... you see the cycle. With the 750 threads in 150 seconds, even pressure
is applied, which causes HotSpot metrics to be even, which results in the appropriate adaptive
sizing policy being applied.
EOS.
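For what it's worth, the arithmetic above can be sketched out like this (plain Java of my own devising, not JMeter internals; method names are mine). Threads = total transactions, ramp-up = total test time, and the pulse-breaking pause is uniform in [0, 3x the mean inter-arrival time); the exponential sampler is included only to show what a true Poisson arrival process would draw for the same mean:

```java
import java.util.Random;

public class OpenModelSetup {
    // One thread per business transaction, ramp-up = whole test, 1 iteration,
    // so mean inter-arrival = ramp-up / threads (750 threads / 150 s -> 200 ms).
    static long meanInterArrivalMillis(int totalThreads, long rampUpMillis) {
        return rampUpMillis / totalThreads;
    }

    // Random pause at the top of the thread group, uniform in
    // [0, 3 * mean inter-arrival) ms, to break up the ramp-up "pulse".
    static long randomPauseMillis(long meanInterArrival, Random rng) {
        return (long) (rng.nextDouble() * 3 * meanInterArrival);
    }

    // For comparison: a Poisson arrival process has exponentially
    // distributed inter-arrival times with the same mean.
    static double exponentialInterArrivalMillis(long meanInterArrival, Random rng) {
        return -meanInterArrival * Math.log(1.0 - rng.nextDouble());
    }

    public static void main(String[] args) {
        long mean = meanInterArrivalMillis(750, 150_000);
        System.out.println("mean inter-arrival: " + mean + " ms");
        Random rng = new Random();
        System.out.println("flat pause:  " + randomPauseMillis(mean, rng) + " ms");
        System.out.println("exponential: " + exponentialInterArrivalMillis(mean, rng) + " ms");
    }
}
```

In a real test plan the pause itself would be a Uniform Random Timer (or similar) at the top of the thread group; the sketch just shows the bounds you'd plug into it.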
> 
>> Also, it would be really, really nice to normalize the priority behavior of some of
>> the components, such as the timers. IME, how timers work is 1) not intuitive and hence
>> difficult for newbies to get right and 2) creates extra work trying to get the timers to
>> behave (i.e., the test action or simple controller hack-around).
> 
> I'm definitely interested in this, what specifically about priorities.
> I'm in the camp of hacking the timers to get it to behave "correctly".
> At least its better than the perl script we've currently got that
> calculates a bunch of values to try to set ramp up/throughput times to
> get what we are looking for.

I'm not sure what you're looking for here. I just bury one of the existing timers. The timers
work very well; it's just the order of execution due to priorities that people find odd. I'm
used to them, but since I also teach people how to use JMeter and I use it in demos fairly
often, I get an up-close-and-personal view of people's pain points in understanding how this
beast flies. Testing is hard enough ;-)

That said, I respect Sebb's work and commitment to this group. I've been following for quite
some time and I'd say it's amazing, so I do want to be respectful of that.

Regards,
Kirk




---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-user-help@jakarta.apache.org

