jmeter-user mailing list archives

From sebb <>
Subject Re: Correct configuration of JMeter for testing TPS allocated
Date Mon, 16 Jan 2017 15:13:03 GMT
On 16 January 2017 at 13:26, alexk <> wrote:
> Hello,
> I was given access to a web service API that implements a throttling policy
> per user account. Each account has an allocated TPS. In my case the
> allocated TPS is 80.
> The problem is that for many requests that I make to the API I get: "HTTP
> Error 429 -- Too many requests try back in 1 second" even when I set my
> client's TPS to 50.
> I am pretty confident about the throttling I am performing on my end (used
> both Thread sleep and Guava's RateLimiter to test) and when I brought this
> up with the API owner they asked me to test with JMeter as they have done
> and they have certified that their API correctly implements TPS allocation.
> This is what I did. Testing from my production server's CLI with the
> configuration I have attached (http_req.jmx), I again received 429 errors.
> The way I did the test is that I created a thread group with:
> Number of threads: 10 (which is the number of threads my Java client's
> threadpool has configured)
> Ramp-up Period: 10
> Loop count: Forever
> And a timer:
> Constant Throughput Timer: 4800

That looks fine, assuming the timer was set to calculate the
throughput based on all the threads, not per thread.
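For reference, the relevant fragment of a saved test plan with the timer set to calculate across all active threads might look like this (a sketch from memory of JMX files, so the element and property names should be checked against a plan saved from the JMeter GUI):

```xml
<ConstantThroughputTimer guiclass="TestBeanGUI" testclass="ConstantThroughputTimer"
                         testname="Constant Throughput Timer" enabled="true">
  <!-- Target rate is in samples per MINUTE: 4800/min = 80/s -->
  <doubleProp>
    <name>throughput</name>
    <value>4800.0</value>
    <savedValue>0.0</savedValue>
  </doubleProp>
  <!-- 1 = "all active threads"; 0 ("this thread only") would make each
       of the 10 threads aim for 4800/min, i.e. 10x the intended rate -->
  <intProp name="calcMode">1</intProp>
</ConstantThroughputTimer>
```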

> They told me that they did the test in a different way:
> Number of threads: 80
> Ramp-up Period: 1
> Loop count: 1
> I believe that their approach is not the correct way to perform the test, as
> it performs the request only once, whereas in my case the issue appears after
> a while. However, given my very limited exposure to JMeter, I am not confident
> enough about my claims and approach either.

Their approach means the max throughput will depend on how quickly
their server and JMeter get warmed up.

It's hardly ever correct to use a loop count of 1.
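A back-of-the-envelope calculation (plain Python, with the numbers from the thread) shows why a loop count of 1 says little about sustained throttling: the test issues a fixed burst of threads x loops requests and then stops, so it never holds the 80 TPS limit over time:

```python
# Their test: a single burst, then the test ends.
threads = 80
loops = 1
ramp_up_s = 1

total_requests = threads * loops  # 80 requests in total

# Best case, the burst is spread evenly over the ramp-up period,
# reaching the limit momentarily -- for one second only:
burst_rate = total_requests / ramp_up_s  # requests/s during the burst

# A sustained test (loop forever + throughput timer) instead holds the
# target rate indefinitely, e.g. 4800 samples/min across all threads:
sustained_rate = 4800 / 60  # requests/s, maintained for the whole run

print(total_requests, burst_rate, sustained_rate)
```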

> Could someone confirm whether my approach is correct, or indicate where I am
> wrong? I would also really appreciate it if someone could give a brief
> explanation I can convey to the owner's testing team about any flaws (if
> there are any) in their approach to testing with JMeter.

See above.

I had a quick look at the attached JMX.
I would recommend disabling/removing the View Results Tree and View
Results in Table listeners, as they are expensive.
Add a Summary Report listener instead, as that will show the throughput.

Also you can replace the HTTP Sampler with a Java Request sampler.
Set the sleep time/mask according to the expected response times from
the server.

You can then run the test and see how JMeter behaves.

I just tried using the default sleep settings and it took quite a
while for the throughput rate to build up to 40.

If you know how long the test takes and how many requests were
serviced, you can manually calculate the average throughput as a
cross-check.

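As a sketch of that cross-check (plain arithmetic, with illustrative figures rather than anything measured):

```python
# Hypothetical figures read off a test run:
requests_serviced = 24_000   # total samples reported by JMeter
test_duration_s = 300        # wall-clock length of the run, in seconds

avg_tps = requests_serviced / test_duration_s
print(f"average throughput: {avg_tps:.1f} requests/s")
```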
As I recall, the Summary listener calculates the cumulative rate
rather than the peak rate, so I suppose it would be possible for the
server to see a temporary overload if it used a short measurement
interval.

If you record basic test results (start time and elapsed should be
enough) you can process the file to measure the gaps between the
samples, in case that is how they are measuring TPS.
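If you save the results as CSV with the default timeStamp and elapsed columns, a short script can count how many samples start in each one-second window, which is presumably closer to how a per-second throttle measures TPS (a sketch; the column name matches JMeter's default CSV output, but the file path is hypothetical):

```python
import csv
from collections import Counter

def tps_per_second(jtl_path):
    """Count samples whose start time falls in each 1-second bucket."""
    buckets = Counter()
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            # timeStamp is epoch milliseconds in JMeter's default CSV output
            buckets[int(row["timeStamp"]) // 1000] += 1
    return buckets

# Example usage (path is hypothetical):
# buckets = tps_per_second("results.jtl")
# print("peak TPS seen:", max(buckets.values()))
```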

> Thank you in advance
> Alex
> ---------------------------------------------------------------------
> To unsubscribe, e-mail:
> For additional commands, e-mail:
