jmeter-user mailing list archives

From Glenn Caccia <gacac...@yahoo.com.INVALID>
Subject Re: Thoughts on InfluxDB/Grafana integration
Date Mon, 13 Apr 2015 16:28:37 GMT
I wonder if this might be solved using Grafana scripted dashboards.  That might also be a
better solution than Chaitanya's Grafana Dashboard Generator.  That is, rather than having
an external tool that generates a dashboard, if JMeter could itself trigger the creation of
the dashboard using the Grafana scripted dashboard capability, then the dashboard could be
created with the precise start and end times of the run.  It would be nice if this all happened
inside JMeter instead of needing to run a different tool.  It might be best to generate two
versions of the dashboard: a "live" version with no time filtering so you can follow results
as the test is running, and a historical one, generated when the run finishes, with queries
hard-coded for the appropriate start and end times.  Just a thought.  I'm going to play
around with scripted dashboards and see what they can do.
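
For illustration, here's a minimal sketch of what such a scripted dashboard could look
like.  Grafana scripted dashboards are plain JavaScript files dropped into Grafana's
public/dashboards/ directory and rebuilt on every page load, with URL query parameters
exposed through the global ARGS object.  The file name, the "prefix" parameter, and the
Graphite-style series name below are assumptions made for the sake of the example, not
anything JMeter or Grafana ships:

    // public/dashboards/jmeter-run.js -- hypothetical file name for this sketch
    'use strict';

    // Grafana populates ARGS from the URL query string, e.g.
    // /dashboard/script/jmeter-run.js?from=...&to=...&prefix=jmeter.MyPlan.1428940717
    var dashboard = {
      title: 'JMeter run ' + (ARGS.prefix || '(live)'),
      rows: [],
      time: {
        // Hard-code the run's start/end when supplied; otherwise fall back to a
        // rolling window, which gives the "live" flavor of the dashboard.
        from: ARGS.from || 'now-15m',
        to:   ARGS.to   || 'now'
      }
    };

    dashboard.rows.push({
      title: 'Response times',
      height: '300px',
      panels: [{
        title: 'Average response time',
        type: 'graph',
        span: 12,
        // Assumed Graphite-style series as written by JMeter's Backend Listener;
        // adjust the target to match your datasource and rootMetricsPrefix.
        targets: [{ target: (ARGS.prefix || 'jmeter') + '.all.a.avg' }]
      }]
    });

    // A scripted dashboard must return the dashboard object (or a function
    // taking a callback, for asynchronous construction).
    return dashboard;

Since the dashboard is regenerated from its URL on every load, the "historical" version
would just be a bookmarkable URL with from/to (and the run's metrics prefix) filled in,
while the same URL without those parameters serves as the live view.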

      From: Glenn Caccia <gacaccia@yahoo.com.INVALID>
 To: JMeter Users List <user@jmeter.apache.org> 
 Sent: Monday, April 13, 2015 8:58 AM
 Subject: Re: Thoughts on InfluxDB/Grafana integration
   

Not really.  It's certainly a nice tool for quickly creating a new dashboard for a new script. 
However, it doesn't address the issue of comparing results from different runs of the same
script.  There are other ways of dealing with that (taking screenshots after each run and
putting them in a common doc, for example), but ideally your reporting solution would be a
one-stop shop for all your reporting needs.
  

    From: Bob <b.meliev@gmail.com>
 To: JMeter Users List <user@jmeter.apache.org> 
 Sent: Sunday, April 12, 2015 8:43 PM
 Subject: Re: Thoughts on InfluxDB/Grafana integration
  
Glenn, I think Chaitanya's Grafana Dashboard Generator might solve the problem:

https://github.com/bhattchaitanya/Grafana-Dashboard-Generator/wiki

On 11/04/15 01:45, Glenn Caccia wrote:
> Thinking about this more, you could use a dynamic rootMetricsPrefix, something like:
>
> jmeter.${__TestPlanName}.${__time}.
>
> That could then be used across all scripts and would satisfy the basic requirement from
> a storage perspective, but Grafana itself still can't easily handle the requirement from a
> display perspective.  Since queries are hard-coded into a graph, you'd be stuck either
> needing to make a new dashboard for each test run or manually editing a dashboard for each
> test run.  It would be a mess to work with.
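
(As a concrete sketch of that idea: set on the Backend Listener, with __time asked to emit
epoch seconds instead of milliseconds, the prefix could look like

    rootMetricsPrefix = jmeter.${__TestPlanName}.${__time(/1000)}.

so that each run writes to its own series tree, e.g.

    jmeter.MyPlan.1428940717.all.a.avg

The plan name and timestamp here are illustrative, and the exact metric suffixes depend on
the Backend Listener version.)
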
>        From: Glenn Caccia <gacaccia@yahoo.com.INVALID>
>  To: JMeter Users List <user@jmeter.apache.org>
>  Sent: Friday, April 10, 2015 1:23 PM
>  Subject: Re: Thoughts on InfluxDB/Grafana integration
>    
>
> You could do that, but it would then require remembering to change the root value each
> time you did a new run, which would in turn require changing your dashboard queries each
> time to pick up the new run.  I don't think that's a solution I would want to maintain.
> I would definitely use variations on the rootMetricsPrefix to distinguish between test
> scripts, however.  The InfluxDB/Grafana solution is great for real-time analysis, which is
> certainly important, but it seems to fall short on the need to easily compare runs.
>        From: Philippe Mouawad <philippe.mouawad@gmail.com>
>
>
>  To: JMeter Users List <user@jmeter.apache.org>
>  Sent: Friday, April 10, 2015 11:54 AM
>  Subject: Re: Thoughts on InfluxDB/Grafana integration
>    
> Hi,
> What about playing on rootMetricsPrefix to do that?
>
> Regarding SQL, do you know that you can now easily build a JDBC backend to
> store results in a database?  You could contribute this to core.
>
>
> Regards
>
>
>
> On Friday, April 10, 2015, Glenn Caccia <gacaccia@yahoo.com.invalid> wrote:
>
>>    I've successfully installed InfluxDB and Grafana and did some basic
>> testing where I can now see results in Grafana.  I'm beginning to wonder
>> about the benefits of this system.  A while ago I had toyed around with the
>> idea of using Elasticsearch as a backend for JMeter test results and using
>> Kibana to view results.  I ultimately dropped the idea because of the
>> limitations of how data is structured.  I see the exact same issue with
>> InfluxDB and Grafana (either that, or I don't fully understand what can be
>> done in these tools).
>> What I want when viewing results is the ability to work with results in
>> terms of projects, test plans, and results from a particular test run.  For
>> example, I want to see results for project A, test plan B and compare
>> results from the prior run with the current run.  With the InfluxDB/Grafana
>> solution, there is no concept of a run.  If I run a test one day and then
>> run the same test the subsequent day, I can't compare the results using the
>> same view.  I can certainly change my time filter to see both inline (with
>> a big gap in between) or view one and then view the other, but I can't stack
>> them in separate graphs and see them at the same time or display them in
>> the same graph.  Likewise, if I want to see what performance was like the
>> last time a test was run and I don't know when the last test was run, I
>> have to do a bit of searching by playing with the time filter.
>> A while ago I worked for a company that used SQL Server for a lot of their
>> data storage needs.  This gave me access to the SQL Server Report Builder
>> tool.  I was able to create a solution where JMeter results were loaded
>> into SQL Server and we had a report interface where you could choose your
>> project, choose your test plan and then see the dates/times for all prior
>> runs.  From this, you could choose which run(s) to view.  I don't have
>> access to tools like that with my current company, but I miss that kind of
>> ability to structure and access test results.  A similar approach
>> to storing and presenting results can be seen with Loadosophia.
>> In short, it seems like this new solution is primarily useful for
>> analyzing results from a current test run (which can already be done with
>> existing listeners) but is not as useful a tool for comparing results or
>> checking on results from prior runs.  Am I missing something or is that a
>> fair conclusion?
>>
>


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@jmeter.apache.org
For additional commands, e-mail: user-help@jmeter.apache.org