impala-dev mailing list archives

From Daniel Hecht <dhe...@cloudera.com>
Subject Re: "tests for tests" in gerrit-verify-dryrun?
Date Mon, 25 Sep 2017 23:38:59 GMT
+1 to branching based on e.g. files in commit.
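
For concreteness, here is a minimal sketch of what such a branch could look like in a GVD shell step; the path pattern and the fallback script name are assumptions for illustration, not the actual job:

    # List the files touched by the commit under test.
    changed_files=$(git diff-tree --no-commit-id --name-only -r HEAD)

    # If every changed file is under the infra-test tree, run only the quick
    # infra tests; otherwise fall through to the full build-and-test run.
    if ! echo "${changed_files}" | grep -qv '^tests/comparison/'; then
        impala-py.test tests/comparison/tests    # infra-only change
    else
        ./bin/run-full-gvd.sh                    # hypothetical full job
    fi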

On Mon, Sep 25, 2017 at 4:33 PM, Philip Zeyliger <philip@cloudera.com>
wrote:

> I'm not entirely familiar with the current complexity of the Jenkins jobs
> on the ASF infrastructure, but I think it's very sensible to look at the
> files in a commit (e.g., "git diff-tree --no-commit-id --name-only -r
> HEAD") and branch based on patterns in that data.
>
> -- Philip
>
> On Mon, Sep 25, 2017 at 4:27 PM, Taras Bobrovytsky <tarasbob@apache.org>
> wrote:
>
> > I like the idea of having tests for tests as part of GVD. This helps
> > ensure that the tests are always functional and are never broken by a
> > commit. Having tests in a functional state is arguably just as important
> > as having a functional product.
> >
> > On Mon, Sep 25, 2017 at 4:04 PM, Michael Brown <mikeb@cloudera.com>
> wrote:
> >
> > > Hello,
> > >
> > > I'm about to start working on Impala's random query generator, a
> > > testing tool to help find test gaps in Impala's functional tests.
> > >
> > > The random query generator and the infra code around it have some
> > > functional and pure unit tests [0] that are not part of GVD, but it
> > > wouldn't be hard to fold them into GVD's execution. As part of the
> > > upcoming work, I plan to add even more tests: we need quick unit or
> > > functional tests to ensure a test tool is working as expected.
> > >
> > > What are people's thoughts on having these "tests for tests", or infra
> > > tests, be part of GVD?
> > >
> > > Pros:
> > > 1. Helps prevent regression in tools and infra
> > >
> > > 2. Verification procedure is the same as with the rest of Impala: run
> > > gerrit-verify-dryrun
> > >
> > > 3. Automatic Apache RAT verification
> > >
> > > Cons:
> > > 1. Patches to the random query generator tend to be self-contained.
> > > Should we spend more AWS cycles and time building Impala and running
> > > its tests just to run a small (but growing) set of infra tests?
> > >
> > > 2. Flaky tests and failing builds can block test tool progress
> > >
> > > Other solutions if the cons win:
> > > 1. A separate Jenkins job for these tests (there's a separate job for
> > > submitting and verifying docs, for instance). A con is that this can
> > > lead to a proliferation of Jenkins jobs and confusion among
> > > contributors about which jobs apply where. Also, if a patch ever
> > > updates both Impala proper and the query generator, which job wins?
> > >
> > > 2. Keep the status quo and set Verified+1/Submitted by hand. This is
> > > much easier for committers than for non-committers. I'm OK with the
> > > status quo, but in the past there have been requests to improve this
> > > situation [1].
> > >
> > > As a data point: if I cd to "tests/comparison/tests" and run
> > > "impala-py.test", the 71 tests take about 10 seconds to run.
> > >
> > > Thanks for any feedback.
> > >
> > > [0] https://git-wip-us.apache.org/repos/asf?p=incubator-impala.git;a=tree;f=tests/comparison/tests;h=49e3b5d7d9a6f5f716c135bda36292e05fb0e0d3;hb=HEAD
> > >
> > > [1] https://issues.apache.org/jira/browse/IMPALA-4756
> > >
> >
>
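
A minimal sketch of the local run Michael describes above, assuming an Impala development environment is already set up (the impala-config.sh step is an assumption from the usual Impala workflow, not something stated in the thread):

    # Set up the Impala environment (assumed prerequisite).
    source bin/impala-config.sh

    # Run the random query generator's own unit/functional tests.
    cd tests/comparison/tests
    impala-py.test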
