ignite-issues mailing list archives

From "Sergey Kozlov (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (IGNITE-3756) Yardstick benchmark roadmap
Date Wed, 24 Aug 2016 13:54:21 GMT

     [ https://issues.apache.org/jira/browse/IGNITE-3756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sergey Kozlov updated IGNITE-3756:
----------------------------------
    Description: 
Here I'm trying to collect the most important things that could significantly improve Yardstick and make its development and use clearer and simpler.

h3. 1. Usage

h4. 1.1 A deploy script/mode over the network must be provided.
A localhost benchmark should require no SSH connection at all. It is really odd that SSH key authentication and a running SSH server are required just to run benchmarks on a local laptop.
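As a rough sketch (not existing Yardstick code; {{isLocal}} is a made-up helper), the launcher could detect hosts that resolve to the local machine and skip the SSH path for them entirely:

{code:java}
import java.net.InetAddress;
import java.net.NetworkInterface;

public class LocalHostCheck {
    /** Returns true when the host resolves to this machine, so no SSH session is needed. */
    static boolean isLocal(String host) {
        try {
            InetAddress addr = InetAddress.getByName(host);
            // Loopback ("localhost", 127.0.0.1, ::1) or an address bound to a local interface.
            return addr.isLoopbackAddress() || NetworkInterface.getByInetAddress(addr) != null;
        }
        catch (Exception e) {
            return false; // Unknown host: treat as remote and let SSH report the real error.
        }
    }

    public static void main(String[] args) {
        System.out.println("localhost is local: " + isLocal("localhost"));
    }
}
{code}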

h4. 1.2 One script to deploy, start servers, start drivers, collect logs from the servers and generate the HTML report.
It should have a minimal set of options: the path to the configuration and the operation to perform. For instance:
make everything: {{./benchmark.sh --config=ignite-local-benchmarks.xml}}
deploy only: {{./benchmark.sh --config=ignite-local-benchmarks.xml --operation=deploy}}
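A minimal sketch of how such a single entry point could dispatch operations (all names here are hypothetical; the real script would stay a thin shell wrapper around something like this):

{code:java}
/** Hypothetical single entry point behind benchmark.sh: one config file, one operation. */
public class BenchmarkLauncher {
    enum Operation { DEPLOY, SERVERS, DRIVERS, LOGS, REPORT, ALL }

    public static void main(String[] args) {
        String config = null;
        Operation op = Operation.ALL; // No --operation means "make everything".

        for (String arg : args) {
            if (arg.startsWith("--config="))
                config = arg.substring("--config=".length());
            else if (arg.startsWith("--operation="))
                op = Operation.valueOf(arg.substring("--operation=".length()).toUpperCase());
        }

        if (config == null)
            throw new IllegalArgumentException("--config=<file> is required");

        // Each operation would delegate to the existing deploy/run/collect/report logic.
        System.out.println("Running " + op + " with " + config);
    }
}
{code}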

h4. 1.3 Introduce a single XML configuration.
Instead of {{benchmark.properties}}, {{ignite.xml}} and command-line options for the benchmarks, put everything into one Spring XML file.
For instance, it could be {{org.apache.ignite.yardstick.configuration}} for common parameters and {{org.apache.ignite.yardstick.configuration.<benchmark name>}} for benchmark-specific parameters.
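A sketch of how a launcher could read such a file, assuming the proposed bean ids and plain {{Properties}} beans (both are assumptions, not the current format; {{IgnitePutBenchmark}} is just an example name):

{code:java}
import java.util.Properties;

import org.springframework.context.support.FileSystemXmlApplicationContext;

/** Hypothetical reader for the proposed single Spring XML configuration. */
public class BenchmarkConfigReader {
    public static void main(String[] args) {
        FileSystemXmlApplicationContext ctx =
            new FileSystemXmlApplicationContext("ignite-local-benchmarks.xml");
        try {
            // Common parameters shared by every benchmark.
            Properties common =
                ctx.getBean("org.apache.ignite.yardstick.configuration", Properties.class);

            // Parameters specific to one benchmark (bean id follows the naming above).
            Properties put = ctx.getBean(
                "org.apache.ignite.yardstick.configuration.IgnitePutBenchmark", Properties.class);

            System.out.println("warmup = " + common.getProperty("warmup"));
            System.out.println("range  = " + put.getProperty("range"));
        }
        finally {
            ctx.close();
        }
    }
}
{code}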

h4. 1.4 Driver logging.
The driver (client) should report what it is doing right now and give details on why it cannot connect to servers, deploy code, etc. Key points:
a) No SSH key authentication configured.
b) Cannot connect to servers.
c) Print an "alive" message (e.g. "NNNNN operations performed for <benchmark name>") every X seconds (a configuration option) to avoid late detection of deadlocks and hangs (sketched below, together with point d).
d) Print the average numbers at the end of the benchmark run (currently these are only calculated by the JFreeChart report).
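A possible shape for points c) and d), as a self-contained sketch (the class name and wiring are assumptions, not existing Yardstick code):

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical "alive" reporter for the driver: periodic progress plus a final average. */
public class AliveReporter implements AutoCloseable {
    private final AtomicLong ops = new AtomicLong();
    private final long startNanos = System.nanoTime();
    private final ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();

    AliveReporter(String benchmark, long periodSec) {
        // Point c: periodic "alive" message so hangs and deadlocks are noticed early.
        exec.scheduleAtFixedRate(
            () -> System.out.printf("%d operations performed for %s%n", ops.get(), benchmark),
            periodSec, periodSec, TimeUnit.SECONDS);
    }

    /** Called from the benchmark loop after every operation. */
    void operationDone() {
        ops.incrementAndGet();
    }

    /** Point d: stops the reporter and prints the average throughput. */
    @Override public void close() {
        exec.shutdownNow();
        double sec = (System.nanoTime() - startNanos) / 1e9;
        System.out.printf("avg throughput: %.1f ops/sec%n", ops.get() / sec);
    }

    public static void main(String[] args) throws InterruptedException {
        try (AliveReporter r = new AliveReporter("IgnitePutBenchmark", 1)) {
            for (int i = 0; i < 3000; i++) {
                r.operationDone();
                Thread.sleep(1);
            }
        }
    }
}
{code}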


h3. 2. Deploy binaries

Add a Maven profile that includes Yardstick in the fabric binary package (at least for Apache Ignite). That would provide Yardstick out of the box and make the first run easier.

h3. 3. Sources

Is there any reason why we don't currently include the benchmark code for competitors like Hazelcast or Coherence in the Apache Ignite Git repository?
From time to time we can't even compile yardstick-<competitor> due to incompatible changes, etc.
Putting such benchmark code into the Apache Ignite Yardstick module would make it easier to keep the code up to date, compilable and runnable.

h3. 4. New benchmark introduction

h4. 4.1 General rules.
We should create rules for introducing a new benchmark. Right now some benchmarks have hardcoded options, while others add new command-line arguments or drop configuration files into directories such as bin; this should be sorted out somehow.
Ideally every commit should be reviewed, just as we do for product code.
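For instance, one such rule could be that a benchmark takes all of its tunables from the configuration instead of hardcoding them. A minimal sketch with made-up names (the real Yardstick base classes and option names may differ):

{code:java}
import java.util.Properties;
import java.util.Random;

/** Hypothetical benchmark skeleton: every tunable comes from config, nothing is hardcoded. */
public class PutBenchmark {
    private final int keyRange;

    PutBenchmark(Properties cfg) {
        // Defaults live in one place and any option can be overridden from the config file.
        keyRange = Integer.parseInt(cfg.getProperty("keyRange", "100000"));
    }

    /** One benchmark iteration; a real driver would call this in a loop. */
    void test(Random rnd) {
        int key = rnd.nextInt(keyRange);
        // cache.put(key, value) would go here; omitted to keep the sketch self-contained.
        System.out.println("put key " + key);
    }

    public static void main(String[] args) {
        Properties cfg = new Properties();
        cfg.setProperty("keyRange", "50000");
        new PutBenchmark(cfg).test(new Random());
    }
}
{code}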
h4. 4.2 DB comparison.
The most appropriate way to compare a database with Ignite is to design a set of TPC-X benchmarks.


> Yardstick benchmark roadmap
> ---------------------------
>
>                 Key: IGNITE-3756
>                 URL: https://issues.apache.org/jira/browse/IGNITE-3756
>             Project: Ignite
>          Issue Type: Task
>          Components: yardstick
>            Reporter: Sergey Kozlov
>

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
