jmeter-user mailing list archives

From Adrian Speteanu <asp.ad...@gmail.com>
Subject Re: Load testing, Continuous Integration, failing on build-over-build degradation
Date Mon, 15 Jul 2013 20:20:36 GMT
On Mon, Jul 15, 2013 at 11:01 PM, Cedrick Johnson <
cjohnson@backstopsolutions.com> wrote:

> When you configure your JMeter Jenkins job, in Post-Build Actions you can
> have it publish the performance test result report, which points to the
> Test Results .jtl file that is generated when running the test. In that
> report, there's a Performance Threshold section where you can mark the
> build as Unstable when the number of errors exceeds one percentage
> threshold, or Failed when it exceeds another.
>
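[Editorial sketch, not from the original thread: the thresholds described above are stored in the Jenkins job's config.xml roughly as below. The field names (errorUnstableThreshold / errorFailedThreshold) and the result glob are assumptions based on the Performance plugin and may differ between plugin versions.]

```xml
<!-- Hypothetical fragment of a Jenkins job config.xml; element and field
     names are assumptions and should be checked against your Performance
     plugin version. -->
<publishers>
  <hudson.plugins.performance.PerformancePublisher>
    <!-- glob matching the .jtl file produced by the JMeter run -->
    <filename>**/target/jmeter/results/*.jtl</filename>
    <!-- mark the build UNSTABLE when the error rate exceeds 5% -->
    <errorUnstableThreshold>5</errorUnstableThreshold>
    <!-- mark the build FAILED when the error rate exceeds 10% -->
    <errorFailedThreshold>10</errorFailedThreshold>
  </hudson.plugins.performance.PerformancePublisher>
</publishers>
```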
> The errors are determined in your actual load test, i.e. if requests
> start timing out, or other conditions that you are checking in your
> tests begin failing, they will count against this threshold, and Jenkins
> will alert you to a degradation in performance once those thresholds are
> exceeded.
>
> -c
>
>
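[Editorial sketch: one way such an error condition can be expressed inside the test plan itself is a Duration Assertion, which makes any response slower than a cutoff count as an error in the .jtl. The element layout below follows the standard JMeter .jmx format; the name and 2000 ms value are illustrative, so verify against your JMeter version.]

```xml
<!-- JMeter .jmx fragment (sketch): a Duration Assertion attached to a
     sampler; responses slower than 2000 ms are recorded as errors, which
     then count against the Jenkins error thresholds. -->
<DurationAssertion guiclass="DurationAssertionGui"
                   testclass="DurationAssertion"
                   testname="Response under 2s" enabled="true">
  <stringProp name="DurationAssertion.duration">2000</stringProp>
</DurationAssertion>
```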
+1
We did this for functional tests, covered known bugs as well, and only
wanted to mark the build as unstable instead of completely failing the job.


> -----Original Message-----
> From: Shmuel Krakower [mailto:shmulikk@gmail.com]
> Sent: Monday, July 15, 2013 1:54 PM
> To: JMeter Users List
> Subject: Re: Load testing, Continuous Integration, failing on
> build-over-build degradation
>
> Hi Adrian
> Thanks for sharing, but how exactly do you control the response time
> thresholds or error rates?
> I cannot find any control for this...
>  On Jul 15, 2013 4:26 PM, "Adrian Speteanu" <asp.adieu@gmail.com> wrote:
>
> > Hi,
> >
> > Check my attempt at an answer below.
> >
> > Regards,
> > Adrian S
> >
> > On Mon, Jul 15, 2013 at 2:56 PM, Marc Esher <marc.esher@gmail.com>
> wrote:
> >
> > > Greetings all,
> > >
> > > I'm integrating our load tests into our CI environment, with the
> > > goal of identifying performance degradation as soon as possible. The
> > > idea is to use some kind of threshold, from one CI build to the next,
> > > to identify when performance has dipped to an unacceptable level from
> > > one run to another.
> > >
> > > I'm using Jenkins, currently.
> > >
> > > Anyone have any guidance, strategy, experience, wisdom here?
> > >
> > > The Jenkins Performance Plugin is decent for reporting trends, but
> > > it has no capabilities to automatically spot problems.
> > >
> >
> > What is your exact expectation regarding this last phrase?
> >
> > I'm currently using the maven plugin, and it integrates nicely with
> > the jenkins plugin that you mentioned. The tests fail when expected.
> > Here are the configurations made to the pom.xml (I followed the
> > tutorial from the jenkins plugin project when first setting up this
> > test project). The thresholds for failures are set in the Jenkins plugin
> > and they work.
> >
> >                 <groupId>com.lazerycode.jmeter</groupId>
> >                 <artifactId>jmeter-maven-plugin</artifactId>
> > ...
> >                 <executions>
> >                     <execution>
> >                         <id>jmeter-tests</id>
> >                         <phase>verify</phase>
> >                         <goals>
> >                             <goal>jmeter</goal>
> >                         </goals>
> >                     </execution>
> >                 </executions>
> >
> > execution: #mvn clean verify
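
[Editorial sketch: for readers setting this up from scratch, the fragment above fits into pom.xml roughly as below. The version number shown is an illustrative assumption, not from the original message; check Maven Central for the current release of the plugin.]

```xml
<!-- Hedged sketch of the full plugin block for pom.xml; the version is
     illustrative only. -->
<plugin>
    <groupId>com.lazerycode.jmeter</groupId>
    <artifactId>jmeter-maven-plugin</artifactId>
    <version>1.9.1</version> <!-- assumed version; verify before use -->
    <executions>
        <execution>
            <id>jmeter-tests</id>
            <phase>verify</phase>
            <goals>
                <goal>jmeter</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```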
> >
> >
> > > Thanks!
> > >
> > > Marc
> > >
> >
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
> For additional commands, e-mail: user-help@jmeter.apache.org
>
