arrow-dev mailing list archives

From Wes McKinney <wesmck...@gmail.com>
Subject Re: [Discuss] Benchmarking infrastructure
Date Mon, 01 Apr 2019 16:05:04 GMT
hi David -- yes, we definitely should set up cross-host and
cross-implementation performance testing for Flight, with results we
can measure and record in the benchmark database. As one starting point:

https://issues.apache.org/jira/browse/ARROW-4566

- Wes

On Mon, Apr 1, 2019 at 10:30 AM David Li <li.davidm96@gmail.com> wrote:
>
> One more thought, is there interest in running cross-host Flight
> benchmarks, and perhaps validating them against iperf or a similar
> tool? It would be great to get latency/throughput numbers and make
> sure upgrades to gRPC don't tank performance by accident, and it would
> help argue for why people should use Flight.
>
> I assume localhost benchmarks with Flight would just work with the
> existing benchmark infrastructure, as a starting point.
>
> It might also be interesting to benchmark Flight implementations
> against each other. This all probably fits a general need for more
> Flight tests/benchmarks.
>
> Best,
> David
>
> On 3/30/19, Antoine Pitrou <antoine@python.org> wrote:
> >
> > On 29/03/2019 at 16:06, Wes McKinney wrote:
> >>
> >>> * How to make it available to all developers? Do we want to integrate
> >>> into CI or not?
> >>
> >> I'd like to eventually have a bot that we can ask to run a benchmark
> >> comparison versus master. Reporting on all PRs automatically might be
> >> quite a bit of work (and load on the machines)
> >
> > We should also have a daily (or weekly, but preferably daily IMO) run of
> > the benchmarks on latest git master.  This would make it easy to narrow
> > down the potential culprit for a regression.
> >
> > Regards
> >
> > Antoine.
> >
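The iperf-style validation David suggests needs a baseline number to compare against. A rough localhost TCP throughput measurement can be sketched with the Python standard library alone; this is purely illustrative and not part of Arrow's benchmark suite, and all names and sizes here are made up for the example:

```python
import socket
import threading
import time

PAYLOAD = b"\x00" * (1 << 20)  # 1 MiB chunks (illustrative size)
TOTAL_CHUNKS = 64              # 64 MiB total (illustrative size)

def _run_server(srv):
    # Accept one connection and stream TOTAL_CHUNKS chunks to it.
    conn, _ = srv.accept()
    with conn:
        for _ in range(TOTAL_CHUNKS):
            conn.sendall(PAYLOAD)

def measure_throughput():
    # Bind to an ephemeral localhost port so the sketch is self-contained.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    threading.Thread(target=_run_server, args=(srv,), daemon=True).start()

    cli = socket.socket()
    cli.connect(("127.0.0.1", port))
    expected = TOTAL_CHUNKS * len(PAYLOAD)
    received = 0
    start = time.perf_counter()
    while received < expected:
        chunk = cli.recv(1 << 20)
        if not chunk:
            break
        received += len(chunk)
    elapsed = time.perf_counter() - start
    cli.close()
    srv.close()
    # MiB per second over loopback
    return received / (1 << 20) / elapsed

if __name__ == "__main__":
    print(f"loopback TCP throughput: {measure_throughput():.1f} MiB/s")
```

A Flight benchmark run on the same host pair should report a throughput within some margin of a raw-socket (or iperf) figure like this one; a large gap would point at overhead in the gRPC/Flight layer rather than the network.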
