jmeter-user mailing list archives

From: Gil Tene <...@azulsystems.com>
Subject: Re: Coordinated Omission - detection and reporting ONLY
Date: Sat, 19 Oct 2013 01:17:28 GMT

> [Trying again - please do not hijack this thread.]
> 
> The Constant Throughput Timer (CTT) calculates the desired wait time,
> and if this is less than zero - i.e. a sample should already have been
> generated - it could trigger the creation of a failed Assertion (or similar)
> showing the time difference.
> 
> Would this be sufficient to detect all CO occurrences?

Two issues:

1. It would detect that CO probably happened, but not how much of it happened. Missing 1 msec
or 1 minute will look the same.

2. It would only detect CO in test plans that include an actual CTT (Constant Throughput Timer).
It won't work for other timers, or when no timers are used at all.
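
For concreteness, the suggested check would amount to something like the sketch below.
This is a hypothetical illustration; the names are made up, not actual CTT internals.

// Hypothetical sketch of the proposed wait-time check. The names here
// are made up for illustration, not actual CTT internals.
static void checkForCo(long nextScheduledSampleMs, long nowMs) {
    long desiredWaitMs = nextScheduledSampleMs - nowMs;
    if (desiredWaitMs < 0) {
        // A sample should already have been generated. Flagging this
        // (e.g. with a failed Assertion) shows that CO happened, but
        // note the two issues above: a per-sample flag doesn't add up
        // to a total magnitude, and it only fires where a CTT exists.
        System.err.println("CO suspected: " + (-desiredWaitMs) + " msec late");
    }
}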

> If not, what other metric needs to be checked?

There are various things you can look for.

Our OutlierCorrector work includes a pretty elaborate CO detector. It either sits inline on
the listener notification stream or parses the log file. The detector identifies sampler
patterns, establishes the expected interval between patterns, and detects when the actual
interval falls far above the expected one. This is a code change to JMeter, but a pretty
localized one.
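
For illustration only, the core of that interval-based idea looks roughly like the sketch
below. This is not the actual OutlierCorrector code; the names and the 2x threshold are
assumptions made up for the example.

// Illustrative sketch of interval-based CO detection (not the actual
// OutlierCorrector code; names and the 2x threshold are made up).
class CoDetector {
    private final long expectedIntervalMs; // derived from the observed sampler pattern
    private long lastStartMs = -1;
    private long missedTimeMs = 0;         // accumulated CO magnitude

    CoDetector(long expectedIntervalMs) {
        this.expectedIntervalMs = expectedIntervalMs;
    }

    // Feed sample start times in order (inline, or from a parsed log).
    void onSampleStart(long startMs) {
        if (lastStartMs >= 0) {
            long actualIntervalMs = startMs - lastStartMs;
            // Flag intervals that fall far above the expected interval.
            if (actualIntervalMs > 2 * expectedIntervalMs) {
                missedTimeMs += actualIntervalMs - expectedIntervalMs;
            }
        }
        lastStartMs = startMs;
    }

    long missedTimeMs() { return missedTimeMs; }
}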

> Even if it is not the only possible cause, would it be useful as a
> starting point?

Yes. As a warning flag saying "throw away these test results".

> I am assuming that the CTT is the primary means of controlling the
> sample request rate.

Unfortunately, many of the test scenarios I've seen don't use the CTT at all. People often
use other timers, or other mechanisms entirely, for think time emulation.

> If there are other elements that are commonly used to control the
> rate, please note them here.
> 
> N.B: this thread is only for discussion of how to detect CO and how to
> report it.

Reporting the existence of CO is an interesting starting point. But the only right way to
deal with such a report showing the existence of CO (with no magnitude or other metrics) is
to say "I guess the data I got is complete crap, so all the stats and graphs I'm seeing mean
nothing".

If you can report "how much" CO you saw, it may help a bit in determining how bad the data
is, and how the stats should be treated by the reader. E.g. if you know that CO totaling some
amount of time X in a test of length Y had occurred, then you know that any percentile above
100 * (1 - X/Y) is completely bogus, and should be assumed to be equal to the experienced
max value. You can also take the approach that the rest of the percentiles should be shifted
down by at least 100 * X/Y. E.g. if you had CO that covered only 0.01% of the total test
time, that would be relatively good news. But if you had CO covering 5% of the test time,
your measured 99%'ile is actually the 94%'ile. Averages are unfortunately anyone's guess
when CO is in play and not actually corrected for.
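
In code form, that arithmetic is just (with X and Y in the same time units):

// The percentile arithmetic above: x = total CO time, y = test length.
static double bogusPercentileFloor(double x, double y) {
    // Percentiles above this value are bogus; assume they equal the max.
    return 100.0 * (1.0 - x / y);
}

static double shiftedPercentile(double reported, double x, double y) {
    // Shift the remaining percentiles down by at least 100 * X/Y points.
    return reported - 100.0 * (x / y);
}

// With X/Y = 5%: bogusPercentileFloor(...) returns 95.0, and
// shiftedPercentile(99.0, ...) returns 94.0, i.e. the measured
// 99%'ile is really the 94%'ile.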

Once you detect both the existence and the magnitude of CO, correcting for it is actually
pretty easy. The detection of "how much" is the semi-hard part.
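
For what it's worth, one way to do that correction is to back-fill the samples that should
have been issued behind each overly long response, at their projected latencies (the same
idea behind HdrHistogram's recordValueWithExpectedInterval()). A simplified sketch, assuming
the expected interval has already been established:

import java.util.ArrayList;
import java.util.List;

// Simplified sketch of expected-interval correction: for each recorded
// latency, add back the samples that were blocked behind it, at their
// projected latencies. Assumes expectedIntervalMs is already known.
static List<Long> correctForCo(List<Long> recordedMs, long expectedIntervalMs) {
    List<Long> corrected = new ArrayList<>(recordedMs);
    for (long valueMs : recordedMs) {
        for (long missingMs = valueMs - expectedIntervalMs;
             missingMs >= expectedIntervalMs;
             missingMs -= expectedIntervalMs) {
            corrected.add(missingMs);
        }
    }
    return corrected;
}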

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
For additional commands, e-mail: user-help@jmeter.apache.org

