arrow-dev mailing list archives

From "Melik-Adamyan, Areg" <areg.melik-adam...@intel.com>
Subject [Discuss] Benchmarking infrastructure
Date Fri, 29 Mar 2019 06:25:15 GMT
Back to the benchmarking per commit.

So currently I have fired up a community TeamCity Edition here: http://arrow-publi-1wwtu5dnaytn9-2060566241.us-east-1.elb.amazonaws.com
backed by a dedicated pool of two Skylake bare-metal machines (Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz).
The pool can grow to up to four machines if needed.
The machines are then prepared for benchmarking in the following way:
- Power-saving features are disabled in the BIOS/Setup
- Machines are locked for access using pam_access
- The maximum CPU frequency is pinned through cpupower and in /etc/sysconfig/cpupower
- All services that are not needed are switched off; the machines sit otherwise idle:
  > uptime
  23:15:17 up 26 days, 23:24, 1 user, load average: 0.00, 0.00, 0.00
- Transparent huge pages are set to madvise:
  > cat /sys/kernel/mm/transparent_hugepage/enabled
  always [madvise] never
- Audit is switched off: auditctl -e 0
- A memory clean is added to the launch scripts: echo 3 > /proc/sys/vm/drop_caches
- intel_pstate=disable is added to the kernel boot parameters
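For reference, the per-boot steps above could be collected into something like the script below. This is only a sketch (run as root); the exact paths match the list above, but the function name and ordering are my own:

```shell
#!/usr/bin/env bash
# Sketch of the host-tuning steps listed above (assumes root).
# BIOS settings and pam_access locking are one-time manual steps
# and are not repeated here.
tune_benchmark_host() {
  # Pin all cores to maximum frequency (persisted via /etc/sysconfig/cpupower).
  cpupower frequency-set --governor performance

  # Restrict transparent huge pages to madvise-only allocations.
  echo madvise > /sys/kernel/mm/transparent_hugepage/enabled

  # Switch off the audit subsystem to avoid syscall-auditing overhead.
  auditctl -e 0

  # Memory clean before each run: drop page cache, dentries, and inodes.
  sync
  echo 3 > /proc/sys/vm/drop_caches
}
```

In a launch script one would call `tune_benchmark_host` once before starting the benchmark suite.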

This configuration gives a relatively clean, low-noise machine.
Commits to master trigger a build and ctest -L benchmarks; the output is then parsed.
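The parsing step could look roughly like the sketch below. The benchmark names and timings in the canned sample are made up, and the console-format layout assumes Google Benchmark output; the real `ctest -L benchmarks` log on the Arrow side may differ:

```shell
#!/usr/bin/env bash
# Sketch: turn Google-Benchmark-style console output into "name,real_time_ns"
# pairs that an uploader (e.g. for Codespeed) could consume.
# The canned sample below stands in for a real `ctest -L benchmarks` log.
cat > bench.log <<'EOF'
BM_BuildDictionary        1895 ns       1893 ns     369067
BM_TakeInt64              9021 ns       9015 ns      77642
EOF

# Keep only benchmark result lines (names start with BM_) and emit CSV.
awk '$1 ~ /^BM_/ { printf "%s,%s\n", $1, $2 }' bench.log
```

Alternatively, running the benchmarks with `--benchmark_format=json` would give structured output and avoid the text parsing entirely.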

What is missing:
* Where should our Codespeed database reside? I can fire up a VM and put it there, or if you
have other preferences, let's discuss.
* What address should it have?
* How do we make it available to all developers? Do we want to integrate it into CI or not?
* What is the standard benchmark output format? I assume Google Benchmark, but let's state that explicitly.
* My interest is the C++ benchmarks only for now. Do we need to track all benchmarks?
* What is the process for adding benchmarks?

Anything else for short term?

-Areg.
