httpd-dev mailing list archives

From: Daniel Ruggeri <DRugg...@primary.net>
Subject: Re: Autobuild Progress (was Re: Automated tests)
Date: Sun, 05 Feb 2017 15:13:42 GMT

On 1/31/2017 4:30 PM, Jacob Champion wrote:
> On 01/30/2017 05:39 PM, Daniel Ruggeri wrote:
>> I'm tremendously inspired by this work. What are your thoughts on the
>> idea of having a series of docker container builds that compile and run
>> the test suite on various distributions? I'll volunteer to give this a
>> whack since it's something that's been in the back of my mind for a long
>> while...
>
> I think that would be awesome. The cheaper we can make new test
> distributions, the easier we can test all sorts of different
> configurations (which, given how many knobs and buttons we expose, is
> important).
>
> I don't know how much of Infra's current Puppet/Buildbot framework is
> Docker-friendly, but if there's currently no cheap virtualization
> solution there for build slaves, then anything we added would
> potentially be useful for other ASF projects as well. Definitely
> something to start a conversation over.
>

Yes, definitely. Thinking more about this, even adding something
heavyweight like a type 2 hypervisor could provide value so long as
the VM image is stripped down enough and we don't leave junk behind on
the slave. I'm not concerned about Puppet and buildbot integration
since Puppet is a great way to manage the configuration of the slave
(assuming that's what it's used for), which makes it easy to have
Docker, VirtualBox, Vagrant or whatever installed and configured.

As far as buildbot goes, I'm sure it will support executing a script,
which is all that's needed. My latest work on the RemoteIPProxyProtocol
stuff has me compiling httpd on my build machine and standing up a
docker container with haproxy inside. Hitting the resulting build under
various circumstances with wget scratches the itch. I've got this
distilled down to only four files (Dockerfile, haproxy.cfg, setup
script and test script). This is nice because... well... I just don't
want to install haproxy on my build box for this.
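
For the curious, the shape of it is roughly the sketch below. The image
tag, config path, port and container name are illustrative placeholders
rather than my actual files:

  # Dockerfile (illustrative)
  FROM haproxy:1.7
  COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
  EXPOSE 8080

  # build and run it, then hit the locally-built httpd through haproxy
  docker build -t proxyprotocol-test .
  docker run -d --net=host --name pp-test proxyprotocol-test
  wget -qO- http://localhost:8080/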

In any event, I've started the conversation with builds@a.o to see
what's doable. I can crosspost, or just report back with feedback once
I hear.


> (Side thought: since raw speed is typically one of the top priorities
> for a CI test platform, we'd have to carefully consider which items we
> tested by spinning up containers and which we ran directly on a
> physical machine. Though I don't know how fast Docker has gotten with
> all of the fancy virtualization improvements.)

Amen to that. Docker's quite fast since lxc and all the stuff around
it are very lightweight. The slowest parts are pulling the base image
and setting it up (installing compilers, the test framework, tools,
etc.). This can be sped up greatly by building the image once and
publishing it back to a (or "the") registry, or keeping it local on
the machine, but we'd then have to maintain images, which I'm not a
fan of.
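
To put the image-maintenance idea in concrete terms: the pre-baked
image would be built once, pushed somewhere, and each slave would just
pull it instead of reinstalling compilers and modules on every run.
Roughly (the registry name and tag are placeholders):

  docker build -t httpd-test-base:debian .
  docker tag httpd-test-base:debian registry.example.org/httpd-test-base:debian
  docker push registry.example.org/httpd-test-base:debian
  # ...and on each build slave:
  docker pull registry.example.org/httpd-test-base:debian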


>
>> I think with the work you've done and plan to do, a step like above to
>> increase our ability to test against many distributions all at once (and
>> cheaply) and also making the test framework more approachable, we could
>> seriously increase our confidence when pulling the trigger on a release
>> or accepting a backport.
>
> +1. It'll take some doing (mostly focused on the coverage of the test
> suite itself), but we can get there.
>
>> I'm also a big fan of backports requiring tests, but am honestly
>> intimidated by the testing framework...
>
> What would make it less intimidating for you? (I agree with you, but
> I'm hoping to get your take without biasing it with my already-strong
> opinions. :D)

Opinions here... so take them with a grain of salt.
* The immediate barrier to entry is documentation. From the test
landing page, you are greeted with a list of what's in the test project
and links to the components. Of the links there (the super important
one for us is the Perl Framework), only the flood link leads to a
useful getting-started guide. This may be lazy and kinda preachy, but
not having good developer info easily accessible is a Bad Thing(tm)
since it's a surefire way to scare off those potentially interested in
participating in a project.
* It's also intimidating when a developer realizes they need to learn
a new skill set to create tests. Writing tests for Perl's testing
framework feels archaic, and I'm not sure it's a skill many potential
contributors possess unless they have developed Perl modules for
distribution. I understand the history of the suite, so I _get_ why
it's this way... it's just likely a turn-off. Disclaimer: I'm not
saying Perl has a bad testing framework. I have yet to find a testing
framework I'm a big fan of since they all have their idiosyncrasies.
No holy wars, please :-)
* Another barrier very much worth pointing out is that several Perl
modules must be installed. I have some history fighting with
Crypt::SSLeay to do what I want because it can be rather finicky. For
example, if your system libssl is something ancient like 0.9.8 but you
compiled httpd against 1.0.2, you'll have a bad time trying to speak
modern crypto algorithms (unless you do some acrobatics to
compile/install the module by hand).
* The setup activities for the test framework also imply root access.
It's definitely possible to install CPAN modules in a local directory,
but that again requires acrobatics (a sketch of what I mean follows
this list). Some folks don't have root, or just don't want to install
system-wide stuff for a single project. Other testing frameworks run
in the same runtime as the code under test (JUnit, for example).
* It also feels weird that the test project is separate and that I can't
run `make test'. This is a spinal reflex for sysadmins after compiling
software. Not really an 'intimidation' thing... just... weird.
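
On the root access point, the non-root route I have in mind is
something like cpanminus plus local::lib. A rough sketch of what I mean
(paths are illustrative, and this is exactly the kind of acrobatics I'd
rather not ask a casual contributor to perform):

  # fetch a standalone cpanm, then install the framework deps under $HOME
  curl -L https://cpanmin.us -o cpanm && chmod +x cpanm
  ./cpanm -l $HOME/perl5 local::lib Bundle::ApacheTest Crypt::SSLeay
  eval "$(perl -I$HOME/perl5/lib/perl5 -Mlocal::lib=$HOME/perl5)"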

These are reasons why I love the idea of using Docker for building and
testing. In a cleanroom pseudo-installation, you have complete control
of the environment and can manipulate it/throw it away. The immutability
also ensures you build and test from a known state. It also helps that
with a few changes to a Dockerfile I can switch from building and
testing on Debian to Ubuntu in minutes.
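
As a rough illustration (package list abbreviated and from memory), the
distro switch is basically just the first couple of lines of the
Dockerfile; everything after that (copy the source, configure, make,
run the suite) stays the same:

  FROM debian:jessie    # ...or: FROM ubuntu:16.04
  RUN apt-get update && \
      apt-get install -y build-essential libpcre3-dev \
                         libapr1-dev libaprutil1-dev perl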

>
> --Jacob

-- 
Daniel Ruggeri

