pagespeed-dev mailing list archives

From Otto van der Schaaf <osch...@we-amp.com>
Subject Re: system-test failure under valgrind
Date Fri, 13 Oct 2017 17:55:17 GMT
One thing I want to add with regard to the development flow, for others
wondering about this:

These wiki pages are useful:
1. https://github.com/pagespeed/mod_pagespeed/wiki/Development
2. https://github.com/pagespeed/mod_pagespeed/wiki/Standing-up-a-development-server


I just added a note to wiki #1 saying that one should update the
mod_pagespeed dependency before starting any development (when working
with the bleeding edge).
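
For illustration, updating the dependency to the tip of master might look
like this (the path testing-dependencies/mod_pagespeed matches the
ngx_pagespeed layout I have; adjust if yours differs):

  # From an ngx_pagespeed checkout: fetch submodules, then move the
  # mod_pagespeed testing dependency to the tip of its master branch.
  git submodule update --init --recursive
  (cd testing-dependencies/mod_pagespeed && \
   git checkout master && git pull origin master)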

Otto


On Fri, Oct 13, 2017 at 7:17 PM Otto van der Schaaf <oschaaf@we-amp.com>
wrote:

> On Fri, Oct 13, 2017 at 6:49 PM Joshua Marantz <jmarantz@google.com.invalid>
> wrote:
>
>> On Fri, Oct 13, 2017 at 12:00 PM, Otto van der Schaaf <oschaaf@we-amp.com
>> >
>> wrote:
>> >
>> > So one drawback is that if you run the mod_pagespeed tests from the
>> > ngx_pagespeed checkout, the mod_pagespeed git submodule will be
>> > pinned to a specific revision (which currently is not the head of
>> > mod_pagespeed's master branch; I'll create a PR to update this).
>> > Did you update the mod_pagespeed testing dependency to the tip of
>> > the branch before running the tests?
>> >
>>
>> No.  Is there a best-practice flow for developing and iterating on PSOL
>> and/or mod_pagespeed?  I thought the general thinking was that we'd do
>> this as a dependency of ngx_pagespeed, to make sure that flow didn't
>> wind up breaking the latter, so that's what I did.
>
>
> If I work on ngx_pagespeed, I follow the flow you just did.
> When I work on mod_pagespeed, I just check it out independently.
>
>
>> If we do a code migration to Apache's source control, then it would be
>> a great opportunity to unify under a single repo.
>
>
> +1! Newer git versions are able to track the head commit of a
> submodule's branch, but we can't assume a version that supports that.
> So while mod_pagespeed and ngx_pagespeed are in different repos, the
> way things are right now is the best we can do, I think.
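>
> For reference, a rough sketch of that branch-tracking feature (requires
> git >= 1.8.2; the submodule name/path here is illustrative):
>
>   # Record in .gitmodules that the submodule should follow master.
>   git config -f .gitmodules \
>       submodule.testing-dependencies/mod_pagespeed.branch master
>   # Move the submodule forward to the tip of that branch.
>   git submodule update --remote testing-dependencies/mod_pagespeed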
>
>
>>
>> > I think it was a flake; the tests certainly are able to pass -- for
>> > example:
>> > http://ci.onpagespeed.com/stream/mod_pagespeed/6203400cd7df4578b786e73282663024d4abfc65/ubuntu-1404-x64-checkin-test
>> > (Test run for
>> > https://github.com/pagespeed/mod_pagespeed/commits/6203400cd7df4578b786e73282663024d4abfc65)
>> > I do not remember seeing the test you posted fail before, though.
>> >
>>
>> One thing I want to remind everyone listening of: if you are writing a
>> positive system-test for rewriting a resource, the best practice is to
>> use fetch_until rather than just WGET and looking at the output.  This
>> allows PageSpeed to finish the rewrite even if the machine is slow.
>> Valgrind tests often tease this out.
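>>
>> For anyone unfamiliar with the helper, a minimal sketch of the pattern
>> (URL and grep pattern are illustrative; see system_test_helpers.sh for
>> the exact signature):
>>
>>   # Flaky: a single fetch can race the rewriter on a slow machine.
>>   OUT=$($WGET_DUMP $EXAMPLE_ROOT/rewrite_css.html)
>>   check_from "$OUT" grep -q '\.pagespeed\.cf\.'
>>
>>   # Robust: keep re-fetching until the rewritten resource appears.
>>   fetch_until $EXAMPLE_ROOT/rewrite_css.html \
>>     'grep -c \.pagespeed\.cf\.' 1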
>>
>> I just ran into this again on a second attempt on
>> css_minify_calc_function_value_zero.sh,
>> which I think was added recently.  I have a fix for that I'm testing now.
>>
>>
> This should already be fixed at the head of mod_pagespeed (I'll create a
> PR against ngx_pagespeed to update the mod_pagespeed dependency after
> sending this out).
>
>
>> > https://github.com/pagespeed/mod_pagespeed/blob/master/devel/checkin
>>
>>
>> Yes, that was what I was looking for!  Although it probably needs work
>> to deal more sanely with flaky tests.  E.g. I'd like it to run all the
>> way through and report every test that failed, then let me re-run just
>> the failing ones while iterating.  Ideally this is a problem someone
>> has already solved with open-source test-script infrastructure we
>> could just use.
>>
>
> +1 -- is there an open issue for this?
>
> I'm not sure if this is the way to go, but some time ago I did some
> experimentation with porting a couple of system tests to a well-known
> Python testing framework (pytest):
> https://github.com/We-Amp/psol_pytest
>
> I think the challenge here is not in the porting, but in the
> coordination/reviewing/double-checking -- the amount of actual work to
> port the tests seems bounded. This would open up opportunities: we could
> run tests in parallel (more easily), repeat tests, and generally have
> more tools at our disposal for writing tests (see the sketch below).
> It's also cross-platform, though bash is becoming increasingly available
> on Windows 10 these days.
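>
> As a concrete example of that last point, pytest already covers the
> "re-run just the failures" workflow Josh described above (the parallel
> and repeat flags assume the pytest-xdist and pytest-repeat plugins; the
> test path is made up):
>
>   # Run everything in parallel, reporting all failures at the end.
>   pytest -n auto
>   # Re-run only the tests that failed in the previous run.
>   pytest --last-failed
>   # Repeat a suspected-flaky test to try to reproduce a failure.
>   pytest --count=20 tests/test_css_minify.py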
>
> Otto
>
>
>
