jmeter-user mailing list archives

From sebb <>
Subject Re: constant rate testing (again)
Date Sun, 08 Aug 2010 18:21:15 GMT
On 28 July 2010 08:13, Felix Frank <> wrote:
> Hi Deepak,
> all of the below is true and quite accurate. The trouble with Jmeter is
> that it is too "patient", and even starting 1000 threads or more won't
> inject the same level of stress to on your server as a couple hundred
> real world users would. That's because Jmeter will gladly stand by for
> minutes at a time. Finding your throughput plateau is fine and all, but
> it would be nice if I could wreck the webserver the same way a swarm of
> real users will.

What you appear to be saying is that real users will give up waiting
for a response if it takes too long, and resubmit the request.
Is this really how users behave? I would have expected them to do
something else, and try again later.

But if your users really do keep hitting the unresponsive server, then
by all means use timeouts.

Also consider adding a Duration Assertion to fail any samples that take too long.

If the server does not respond sufficiently quickly under load, then
that is a problem that needs to be addressed.
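The two suggestions above (client-side timeouts, plus an assertion that fails slow samples) can be sketched outside JMeter. A minimal Python sketch, assuming a hypothetical `send_request` callable that performs one request and returns its elapsed time in seconds; an impatient user aborts after `timeout_s` and resubmits, and any sample slower than `max_ms` is marked failed, the same rule a Duration Assertion applies:

```python
def run_impatient_user(send_request, timeout_s=5.0, max_ms=2000, retries=3):
    """Emulate an impatient real user: abort slow requests and resubmit.

    send_request(timeout_s) is a hypothetical callable that performs one
    HTTP request and returns the elapsed time in seconds; it is assumed
    to raise TimeoutError when the timeout expires.

    Returns a list of (elapsed_ms, passed) sample results, where a
    sample fails if it exceeded max_ms.
    """
    samples = []
    for _attempt in range(retries):
        try:
            elapsed_ms = send_request(timeout_s) * 1000.0
        except TimeoutError:
            # Record the abandoned sample, then resubmit like a real user.
            samples.append((timeout_s * 1000.0, False))
            continue
        # Got a response: apply the duration check and stop retrying.
        samples.append((elapsed_ms, elapsed_ms <= max_ms))
        break
    return samples
```

Sweeping `retries` and `timeout_s` against a loaded server then shows how much extra traffic impatient resubmissions add on top of the nominal load.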

> Regards,
> Felix
> On 07/27/2010 10:57 PM, Deepak Goel wrote:
>> Hey
>> Namaskara~Nalama~Guten Tag
>> Just another thought on this:
>> If your load is reaching the servers, it looks like the max load your
>> server system can handle is that generated by one JMeter server. When you
>> add more servers, the throughput will drop, as the max throughput of the
>> system has already been reached. Beyond that point, increasing the load
>> makes the throughput fall further, because your server cannot handle so
>> many concurrent sessions simultaneously, which adds overhead to the
>> execution of every request in the system.
>> For any system, you have to know the max throughput it can achieve,
>> beyond which the response time starts increasing exponentially. The
>> throughput then reaches a plateau, and if you increase the load further
>> the throughput starts decreasing and the system might even crash.
>> I guess that's what happens in real-world scenarios too. For example, in
>> normal shopping periods the system can manage the real user load with
>> reasonable response times. During festive periods, the system gets
>> overwhelmed by the incoming requests, and the response time increases
>> exponentially. This results in a flat throughput and sometimes even a
>> system crash.
>> Did you try this option?
>> *****************************************************
>> Or is all of this complicated setup == a
>>> large thread group + long ramp up period?
>> *****************************************************
>> Deepak
> ---------------------------------------------------------------------
> To unsubscribe, e-mail:
> For additional commands, e-mail:

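Deepak's rise/plateau/decline curve can be illustrated with a toy model. A sketch under stated assumptions: a single server with a fixed capacity, and a hypothetical `overhead` factor (not a measured value) representing the per-unit penalty of juggling excess concurrent sessions past saturation:

```python
def throughput(offered_rps, capacity_rps, overhead=0.01):
    """Toy model of the curve described above: throughput tracks the
    offered load up to the server's capacity, then degrades as the
    overhead of the excess concurrent sessions eats into useful work.

    overhead is a hypothetical per-request-of-overload penalty.
    """
    if offered_rps <= capacity_rps:
        return offered_rps                            # below saturation
    excess = offered_rps - capacity_rps
    return capacity_rps / (1.0 + overhead * excess)   # past the plateau

# Sweep the offered load: throughput rises linearly, peaks at capacity,
# then falls off as the overload overhead grows.
curve = [throughput(x, capacity_rps=100) for x in (50, 100, 150, 300)]
```

This is only a shape sketch, not a queueing result; the point is that once the peak is passed, pushing more load from extra JMeter servers reduces, rather than increases, the measured throughput.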
