pagespeed-dev mailing list archives

From Otto van der Schaaf <>
Subject Re: system-test failure under valgrind
Date Fri, 13 Oct 2017 17:17:37 GMT
On Fri, Oct 13, 2017 at 6:49 PM Joshua Marantz <>

> On Fri, Oct 13, 2017 at 12:00 PM, Otto van der Schaaf <>
> wrote:
> >
> > So one drawback is that if you run the mod_pagespeed tests from the
> > ngx_pagespeed checkout, the mod_pagespeed git submodule will be pinned
> > to a specific revision. (Which is currently not the head of
> > mod_pagespeed's master branch; I'll create a PR to update this.)
> > Did you update the mod_pagespeed testing dependency to the tip of the
> > branch
> > before running the tests?
> >
> No.  Is there a best-practice flow for developing and iterating on PSOL
> &/or mod_pagespeed?  I thought the general thinking is that we'd do this as
> a dependency of ngx_pagespeed to make sure that in the flow we didn't wind
> up breaking the latter, so that's what I did.

If I work on ngx_pagespeed, I follow the flow you just did.
When I work on mod_pagespeed, I just check it out independently.

> If we do a code migration to Apache's source control then it would be great
> opportunity to unify under a single repo

+1! Newer git versions are able to track a submodule's head commit, but we
can't assume everyone has a version that supports that. So while
mod_pagespeed and ngx_pagespeed are in different repos, the way things are
right now is the best we can do, I think.
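For anyone hitting the pinned-submodule issue above, a rough sketch of moving a submodule to the tip of its remote branch (the submodule path `testing-dependencies/mod_pagespeed` is an assumption here -- substitute whatever `git submodule status` reports in your checkout):

```shell
# From the top of the ngx_pagespeed checkout:
git submodule update --init
# Move the submodule to the tip of the branch recorded in .gitmodules
# (configurable via `git config -f .gitmodules submodule.<name>.branch`):
git submodule update --remote -- testing-dependencies/mod_pagespeed
# Inspect which revision you now have:
git submodule status
```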

> > I think it was a flake; the tests certainly are able to pass -- for
> > example:
> > 578b786e73282663024d4abfc65/ubuntu-1404-x64-checkin-test
> > (test run for 7df4578b786e73282663024d4abfc65)
> > I do not remember seeing the test you posted fail before, though.
> >
> One thing I want to remind everyone listening of: if you are writing a
> positive system-test for rewriting a resource, the best practice is to
> use fetch_until rather than just WGET-ing and checking the output. This
> allows PageSpeed time to finish the rewrite even if the machine is slow.
> Valgrind tests often tease this out.
> I just ran into this again on a second attempt, on a test which I think
> was added recently. I have a fix for that I'm testing now.
This should already be fixed on the head of mod_pagespeed (I'll create a PR
to update ngx_pagespeed to update the mod_pagespeed dependency after
sending this out).
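To make the fetch_until point concrete, here is a sketch of the idea -- not the real helper from the bash system tests; the function name and arguments are made up for illustration. The point is to poll until the rewrite shows up or a deadline passes, instead of asserting on a single fetch:

```shell
# fetch_until_sketch FETCH_CMD PATTERN [TIMEOUT_SECS]
# Hypothetical stand-in for fetch_until: repeat FETCH_CMD until its
# output matches PATTERN, or give up after TIMEOUT_SECS.
fetch_until_sketch() {
  fetch_cmd="$1"; pattern="$2"; timeout_secs="${3:-10}"
  start=$(date +%s)
  while :; do
    # A failing fetch just counts as "not there yet".
    if sh -c "$fetch_cmd" 2>/dev/null | grep -q "$pattern"; then
      return 0                      # rewrite finished; the test can pass
    fi
    if [ $(( $(date +%s) - start )) -ge "$timeout_secs" ]; then
      echo "TIMEOUT waiting for '$pattern'" >&2
      return 1                      # give up after the deadline
    fi
    sleep 1                         # brief pause before retrying
  done
}

# Typical use (URL and pattern are placeholders):
# fetch_until_sketch "wget -qO- http://localhost:8080/page.html" \
#     "\.pagespeed\.ic\." 30
```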

> >
> Yes that was what I was looking for!  Although it probably needs work to
> help deal more sanely with flaky tests.  E.g. I'd like to be able to have
> it go all the way through and tell me all the tests that failed, then give
> me the opportunity to re-run just the failing ones while iterating.
> Ideally this is a problem someone has already solved with some open-source
> test script infrastructure we could just use.

+1 -- is there an open issue for this?
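As a stopgap until something better exists, the "run everything, then re-run only the failures" loop can be sketched in plain shell (`run_one_test` and the test names are hypothetical placeholders, not our actual scripts):

```shell
# Hypothetical driver: run every test, record failures, then offer a
# second pass over just the failing ones. run_one_test is a placeholder
# for however an individual system test is invoked.
run_all() {
  : > failed.txt                    # start with an empty failure list
  for t in "$@"; do
    if run_one_test "$t"; then
      echo "PASS $t"
    else
      echo "FAIL $t"
      echo "$t" >> failed.txt       # remember it for the re-run pass
    fi
  done
}

rerun_failures() {
  [ -s failed.txt ] || { echo "nothing to re-run"; return 0; }
  while read -r t; do
    run_one_test "$t" && echo "PASS-on-retry $t" || echo "STILL-FAILING $t"
  done < failed.txt
}
```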

I'm not sure if this is the way to go, but some time ago I did some
experimentation with porting a couple of system tests to a well-known
Python testing framework:

I think the challenge here is not in the porting, but in the result
checking -- the amount of actual work to port the tests seems bounded.
This would open up opportunities: we could run tests in parallel and
repeat them more easily, and generally have more tools at our disposal for
writing tests. It's also cross-platform, though bash is becoming
increasingly available on Windows 10 these days.

