jmeter-user mailing list archives

From Glenn Caccia <gacac...@yahoo.com.INVALID>
Subject Thoughts on InfluxDB/Grafana integration
Date Fri, 10 Apr 2015 17:38:32 GMT
 I've successfully installed InfluxDB and Grafana and done some basic testing; I can now
see results in Grafana.  I'm beginning to wonder about the benefits of this system.  A while
ago I toyed with the idea of using Elasticsearch as a backend for JMeter test results,
with Kibana to view them, but I ultimately dropped the idea because of the limitations
of how the data is structured.  I see the exact same issue with InfluxDB and Grafana (either
that, or I don't fully understand what can be done in these tools).
What I want when viewing results is the ability to work with them in terms of projects,
test plans, and results from a particular test run.  For example, I want to see results for
project A, test plan B, and compare results from the prior run with the current run.  With
the InfluxDB/Grafana solution, there is no concept of a run.  If I run a test one day and then
run the same test the next day, I can't compare the results using the same view.  I
can certainly change my time filter to see both inline (with a big gap in between), or view
one and then the other, but I can't stack them in separate graphs and see them at the
same time, or display them in the same graph.  Likewise, if I want to see what performance
was like the last time a test was run and I don't know when that was, I have
to do a bit of searching by playing with the time filter.
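As a rough sketch of how a "run" concept could be bolted onto InfluxDB, each sample could carry a run identifier as a tag, which Grafana could then use as a template variable to select or overlay runs.  Everything below is made up for illustration (the measurement name, tag keys, and run_id value are my own, not anything JMeter emits); it just builds InfluxDB line-protocol records by hand:

```python
# Hypothetical sketch: tag every sample with a run_id so Grafana can
# group or filter on it.  Plain line protocol, no client library; the
# names "jmeter", "run_id", "elapsed_ms", etc. are illustrative only.
import time

def to_line_protocol(measurement, tags, fields, ts_ns):
    """Build one InfluxDB line-protocol record: measurement,tags fields ts."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

run_id = "2015-04-10T17:00:00Z"  # one fixed value per test execution
line = to_line_protocol(
    "jmeter",
    {"project": "A", "test_plan": "B", "run_id": run_id},
    {"elapsed_ms": 123},
    int(time.time() * 1e9),
)
# POST lines like this to InfluxDB's /write endpoint; a Grafana
# template variable on run_id would then let two runs be compared
# in the same dashboard instead of by juggling the time filter.
```

This doesn't solve discovery (you still need to know which run_ids exist), but it does give the dashboard something other than a time range to select on.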
A while ago I worked for a company that used SQL Server for a lot of their data storage needs.
This gave me access to the SQL Server Report Builder tool.  I was able to create a solution
where JMeter results were loaded into SQL Server, and we had a report interface where you could
choose your project, choose your test plan, and then see the dates/times for all prior runs.
From there, you could choose which run(s) to view.  I don't have access to tools like that
at my current company, but I miss that kind of ability to structure and access test results.
A similar approach to storing and presenting results can be seen with Loadosophia.
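To make the structure concrete, here is a minimal sketch of that SQL-style layout using SQLite in place of SQL Server; the table and column names are illustrative, not anything from the actual report solution:

```python
# Sketch of structuring JMeter results by project / test plan / run,
# so prior runs can be listed and compared without time-filter guesswork.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE run (
    run_id    INTEGER PRIMARY KEY,
    project   TEXT NOT NULL,
    test_plan TEXT NOT NULL,
    started   TEXT NOT NULL          -- ISO timestamp of the run
);
CREATE TABLE sample (
    run_id     INTEGER REFERENCES run(run_id),
    label      TEXT NOT NULL,        -- sampler name
    elapsed_ms INTEGER NOT NULL
);
""")

# Two runs of the same plan on consecutive days
conn.execute("INSERT INTO run VALUES (1, 'A', 'B', '2015-04-09T10:00:00')")
conn.execute("INSERT INTO run VALUES (2, 'A', 'B', '2015-04-10T10:00:00')")
conn.executemany("INSERT INTO sample VALUES (?, ?, ?)",
                 [(1, "login", 120), (1, "search", 340),
                  (2, "login", 110), (2, "search", 300)])

# "Choose project and test plan, see all prior runs"
runs = conn.execute(
    "SELECT run_id, started FROM run "
    "WHERE project = 'A' AND test_plan = 'B' ORDER BY started"
).fetchall()

# Compare the prior run against the current one, per sampler
comparison = conn.execute("""
    SELECT label,
           AVG(CASE WHEN run_id = 1 THEN elapsed_ms END) AS prior_ms,
           AVG(CASE WHEN run_id = 2 THEN elapsed_ms END) AS current_ms
    FROM sample GROUP BY label
""").fetchall()
```

The point is that "run" is a first-class key here, so listing prior runs and joining two of them into one comparison is a trivial query rather than a time-range hunt.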
In short, it seems like this new solution is primarily useful for analyzing results from a
current test run (which can already be done with existing listeners) but is not as useful
a tool for comparing results or checking on results from prior runs.  Am I missing something
or is that a fair conclusion?
