couchdb-dev mailing list archives

From Joan Touzet <>
Subject CouchDB CI update for December 2019
Date Tue, 31 Dec 2019 19:21:41 GMT
Hello CouchDB Devs (+ Gavin from ASF Infra - BIG thanks for your work to
date!)


We're in the home stretch. There are 2 steps left to switch from Travis
to Jenkins, and only 2-3 steps after that to re-enable automated binary
builds.

Some additional steps are still required to automate building binaries 
from official, released tarballs, as these require credentials to be 
"baked in" and a special manually-triggered job to be written.

Travis to Jenkins transition

Our new server is live:

Committers can log into this server with their ASF credentials.

There are currently 2 jobs defined:

1. The Pull Requests job builds GitHub pull requests (surprise!)

    As previously explained, these builds run against 3 versions of
    Erlang and are intended to directly replace how we use Travis today.

2. The Full Platform Builds job builds our master and release branches
    on every platform for which we have CI workers. This matches the
    current "main" ASF Jenkins build, with builds for everything supported
    by the couchdb-pkg repository.
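
For the curious, the PR job's fan-out is conceptually just a parallel
block in the Jenkinsfile - something like this sketch (the stage names,
the 'docker' label, and the ci-build.sh script are illustrative
placeholders, not our actual file):

```groovy
// Sketch only: a PR pipeline building against 3 Erlang versions in
// parallel. Labels and the build script are placeholders.
pipeline {
  agent none
  stages {
    stage('Test') {
      parallel {
        stage('Erlang 20') {
          agent { label 'docker' }
          steps { sh './ci-build.sh --erlang 20' }
        }
        stage('Erlang 21') {
          agent { label 'docker' }
          steps { sh './ci-build.sh --erlang 21' }
        }
        stage('Erlang 22') {
          agent { label 'docker' }
          steps { sh './ci-build.sh --erlang 22' }
        }
      }
    }
  }
}
```

The real job adds more stages for the full-platform case, but the shape
is the same.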

Open Issues

* We're ready to move off Travis as soon as this PR is merged:

   It needs a +1 to land.

   ASF Infra (Gavin) also needs to disable the required Travis status
   check and enable the Jenkins status check as required before this can
   happen, since the PR removes .travis.yml.

* The new Jenkins machine is not currently allowing JNLP connections.
   Once that port is opened, we'll be able to connect our FreeBSD and OSX
   workers. That should light up the "full" builds green as well.

* In January I'll start work on the "special" job that will auto-build
   and upload our binaries to the usual places after an official tarball
   release.

* We're still waiting on the ARM and PPC workers to be provided. These
   should follow in 1Q2020.

On 2019-10-11 1:41 p.m., Joan Touzet wrote:
> Hello CouchDB Devs (+ Gavin from ASF Infra - BIG thanks for your work to
> date!)
> First, as promised earlier in the week, we have a new flotilla of
> CouchDB CI Docker images waiting in the wings on Docker Hub to replace
> our current Jenkins build agent images:
> couchdbdev/ubuntu-bionic-erlang-
> couchdbdev/ubuntu-xenial-erlang-
> couchdbdev/arm64v8-debian-stretch-erlang-
> couchdbdev/ppc64le-debian-stretch-erlang-
> couchdbdev/arm64v8-debian-buster-erlang-
> couchdbdev/debian-stretch-erlang-
> couchdbdev/debian-buster-erlang-
> couchdbdev/centos-6-erlang-
> couchdbdev/centos-7-erlang-
> couchdbdev/centos-8-erlang-
> The extra platforms are only on debian for the moment because I didn't
> feel like building all the platforms, and because Debian isn't what IBM
> cares most about nowadays. ;) That's also the base image for the Docker
> container, so they came first. (There's a problem with Debian Buster and
> qemu when it comes to ppc64le; see
> for the details.
> This blocked me from updating the Docker container to buster during this
> last revision.)
> These can't be substituted for our current CI images until Fauxton gets
> fixed and rebar.config.script gets updated, see
> for the patch. (This
> is also why is failing
> in Travis right now on the dev build.)
> There's also been progress on Jenkins replacing Travis, and I wanted to
> update everyone on that. There are a few things left before we can get
> completely off of Travis:
> * Install Jenkins on couchdb-vm2 + setup CouchDB-dedicated build agents
>    This had been on hold until the ASF sorted out their approach for
>    multi-master Jenkins machines. I've just sat through the demo for
>    CloudBees Core, which the ASF hopes to bring in to manage lots of
>    Jenkins masters simply. The good news is that each of the Jenkins
> masters it can manage is just plain ol' vanilla Jenkins - so we
>    should be able to proceed now with setting up our own Jenkins instance.
>    I'm very sorry to everyone who's been suffering with subpar (!)
>    Travis CI performance for months now; this was the thing holding us
>    back, and we should be able to move ahead with our own Jenkins master
>    + the IBM donated workers quickly now.
> * Add arm64v8 build agents. ARM has offered to donate to us, through
>    AWS, 2x a1 instances against which we can run our tests. To save on
>    credits, it might be nice to write a first step in the job that uses
>    AWS credentials and the aws-cli to spin those instances up, then in
>    the cleanup step, spin them down, so we don't waste that donation.
>    I've used this in other Jenkins setups, and it's worked extremely
>    well, though it adds about 1 minute of startup delay.
> * Build a new kerl-based Docker image that can be used to emulate our
>    current Travis setup. This shouldn't be too hard to add to the
>    couchdb-ci scripts, but since we want to support Erlang 20, 21 and 22,
>    it'll take my desktop a few hours to crunch out the build and then
>    upload it.
> * Decide, as a group, how we're going to proceed with Jenkins jobs.
>    We can change the PR-triggered job to build a Jenkinsfile replica of
>    our current .travis.yml (ubuntu xenial only, 3 Erlangs), but if we
>    replace the current Jenkinsfile with that, I'm afraid our release
>    process will break again. One of the original motivating factors
>    for moving to Jenkins was to ensure no one broke the release build
>    process. I received support from Paul Davis, Robert Newson and Adam
>    Kocoloski on this approach when I last brought it up.
>    Or, we could merge in the other Erlang tests with the current file,
>    which makes each PR "fatter" (it'll be 11 parallel jobs instead of 9,
>    if you count adding the ppc64le/arm64v8 jobs) at the cost of busier
>    build agents.
>    There are other possibilities too - curious to hear your thoughts.
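
Re: the arm64v8 spin-up/spin-down idea above, the shape of it in a
declarative Jenkinsfile would be roughly this (the instance ID and
credential ID are placeholders, and it assumes the Jenkins AWS
Credentials plugin for the binding):

```groovy
// Sketch only: wake the donated a1 instances before the ARM stage runs,
// and stop them again in cleanup so the donation isn't wasted.
pipeline {
  agent { label 'master' }   // the master drives the start/stop calls
  stages {
    stage('Start ARM agents') {
      steps {
        withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                          credentialsId: 'arm-donation-aws']]) {
          sh 'aws ec2 start-instances --instance-ids i-PLACEHOLDER'
          sh 'aws ec2 wait instance-running --instance-ids i-PLACEHOLDER'
        }
      }
    }
    stage('Build on arm64v8') {
      agent { label 'arm64v8' }
      steps { sh 'make check' }
    }
  }
  post {
    always {
      withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                        credentialsId: 'arm-donation-aws']]) {
        sh 'aws ec2 stop-instances --instance-ids i-PLACEHOLDER'
      }
    }
  }
}
```

The roughly one minute of startup delay mentioned above is the `wait
instance-running` step.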
> Now the golden goose. Why are we bothering with all of this, aside from
> the fact that we have to wait more than an hour, on average, for Travis
> these days? Well, one of the other major reasons is that with a build
> master running on ASF infrastructure (and, thus, under the control of an
> ASF committer), we can safely (and ASF-approvedly!) store credentials
> for services like AWS, IBM Cloud, Docker, and even Bintray in that CI
> infrastructure. With our own master, that means those credentials aren't
> available to the general public, nor any other Apache project (save ASF
> Infra, who's always there to help.)
> That means that we will be positioned to be able to *automatically
> deploy binary convenience packages and Docker images after a release* in
> the very near future.
> NOTE: the apache-couchdb-#.#.#.tar.gz file is *THE* official Apache
> release, and must be cryptographically signed by a PMC member. It cannot
> automatically be pushed in this fashion. We should, however, be able to
> use a CI-built release tarball (from a special Jenkinsfile, presumably)
> for group acceptance testing and manual signing/upload. (The fine print:
> we've actually done this for some of the 2.x release cycle already!)
> Finally, I'm hoping some other community members will take interest and
> help read through, understand, and start taking up some of these tasks.
> I've been doing release management for CouchDB for almost all of 2.x,
> and probably will continue for 3.x, but I'd like to see more of a team
> effort. I have a keen interest in ensuring that work remains a
> *community* effort, especially because I fear the erosion of things like
> cross-Linux-distro support, binary packages vs. Docker, and so on.
> Please, if this interests you at all, speak up - I'll make time to
> mentor you on the current process & build system.
> I look forward to your thoughts and ideas.
> -Joan "really, just a volunteer" Touzet
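
On the kerl-based image bullet above: the Dockerfile portion might look
something like this (the Debian base, the package list, and the exact
Erlang version pins are my assumptions here, not the final couchdb-ci
recipe):

```dockerfile
# Sketch only: a Debian image carrying several kerl-built Erlangs to
# mimic the Travis setup. Versions and install paths are illustrative.
FROM debian:stretch

RUN apt-get update && apt-get install -y \
    autoconf build-essential curl git libncurses5-dev libssl-dev

RUN curl -fsSL -o /usr/local/bin/kerl \
      https://raw.githubusercontent.com/kerl/kerl/master/kerl \
 && chmod +x /usr/local/bin/kerl

# Build and install each Erlang we want to test against; this is the
# part that takes hours to crunch through.
RUN for rel in 20.3.8.25 21.3.8.11 22.2; do \
      kerl build $rel $rel && \
      kerl install $rel /usr/local/erlang/$rel; \
    done
```

A job would then `. /usr/local/erlang/$VERSION/activate` (kerl's
activation script) before building, much like Travis does with its own
Erlang selection.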
