asterixdb-dev mailing list archives

From: abdullah alamoudi <bamou...@gmail.com>
Subject: Re: Undetected failed test cases
Date: Sun, 06 Dec 2015 04:51:22 GMT
Just to add icing on the cake, two of the test cases that are expected to
fail do so only sporadically (i.e., they sometimes succeed)!

How? Why?
I have no clue at the moment, but what should we do with them?
I have disabled them in the code review I submitted.

I urge all of you to look at the change, to see whether you can fix any of
the failing test cases or investigate the ones with strange behavior.
~Abdullah.

Amoudi, Abdullah.

On Thu, Dec 3, 2015 at 4:36 PM, Mike Carey <dtabass@gmail.com> wrote:

> +1 indeed!
> On Dec 3, 2015 3:45 PM, "Ian Maxon" <imaxon@uci.edu> wrote:
>
> > Definite +1.
> >
> > We should also (separately) start checking the output of the CC/NC
> > logs, or otherwise find a way to detect exceptions that go uncaught
> > there. Right now, if an exception doesn't come back to the user as an
> > error when issuing a query, we have no way to detect it.
> >
> > On Thu, Dec 3, 2015 at 2:43 PM, Till Westmann <tillw@apache.org> wrote:
> > > +1 !
> > >
> > >
> > > On 3 Dec 2015, at 14:38, Chris Hillery wrote:
> > >
> > >> Yes, please propose the change. I've been looking at overhauling the
> > >> test framework as well, so I will review.
> > >>
> > >> For Zorba, I implemented a "known failing" mechanism that allowed you
> > >> to mark a test that was currently broken (associated with a ticket ID)
> > >> without disabling it. The framework would continue to execute it and
> > >> expect it to fail. It would also cause the test run to fail if the test
> > >> started to succeed (i.e., the bug was fixed), which ensured that the
> > >> "known failing" mark would get removed in a timely fashion. To be
> > >> clear, this is completely distinct from a negative test case - it was a
> > >> way to avoid forgetting about tests that had to be disabled due to
> > >> known bugs, and to ensure that all such known bugs had an associated
> > >> tracking ticket. It was quite useful there and I was planning to
> > >> re-introduce it here.
> > >>
> > >> Ceej
> > >> aka Chris Hillery
> > >>
> > >> On Thu, Dec 3, 2015 at 2:29 PM, abdullah alamoudi <bamousaa@gmail.com>
> > >> wrote:
> > >>
> > >>> Hi All,
> > >>> Today, I implemented a fix for a critical issue that we have, and I
> > >>> wanted to add a new kind of test case in which the test case has 3
> > >>> files:
> > >>>
> > >>> 1. Create the dataset.
> > >>> 2. Fill it with data that has duplicate keys. This is expected to
> > >>> throw a duplicate key exception.
> > >>> 3. Delete the dataset. This is expected to pass (the bug was here: the
> > >>> dataset was not being deleted).
> > >>>
> > >>> With the current way we use the test framework, we are unable to test
> > >>> such a case, so I started to improve the test framework, beginning
> > >>> with actually checking the type of exception thrown and making sure
> > >>> that it matches the expected error.
> > >>>
> > >>> ... and boom. I found that many test cases fail but nobody notices,
> > >>> because no one checks the type of exception thrown. Moreover, if a
> > >>> test is expected to fail and it doesn't, the framework doesn't check
> > >>> for that. In addition, sometimes the returned exception is
> > >>> meaningless, which is something we absolutely must avoid.
> > >>>
> > >>> What I propose is that I push the improved test framework to master,
> > >>> disable the failing test cases, create JIRA issues for them, and
> > >>> assign each to someone to look at.
> > >>>
> > >>> Thoughts?
> > >>>
> > >>> Amoudi, Abdullah.
> > >>>
> > >
> >
>
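
A rough sketch, in Java, of the kind of post-run log check Ian suggests above.
The log file paths and the strings scanned for are placeholder assumptions,
not the actual CC/NC log layout:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    class LogScan {
        /* Fails the run if any CC/NC log contains a line that looks like an
           uncaught exception or error. */
        public static void main(String[] args) throws IOException {
            // Placeholder locations; the real CC/NC log directories may differ.
            List<Path> logs = List.of(Path.of("target/cc.log"), Path.of("target/nc-1.log"));
            for (Path log : logs) {
                if (!Files.exists(log)) {
                    continue;
                }
                for (String line : Files.readAllLines(log)) {
                    if (line.contains("Exception") || line.contains("ERROR")) {
                        throw new AssertionError("Suspicious line in " + log + ": " + line);
                    }
                }
            }
            System.out.println("No uncaught exceptions found in the scanned logs.");
        }
    }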
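
A minimal sketch of the "known failing" mechanism Chris describes, assuming a
simple annotation-driven runner. The @KnownFailing annotation and the
KnownFailingRunner class are hypothetical illustrations, not the Zorba or
AsterixDB implementation:

    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;

    /* Hypothetical marker: the test is known to fail; "ticket" tracks the bug. */
    @Retention(RetentionPolicy.RUNTIME)
    @interface KnownFailing {
        String ticket();
    }

    class KnownFailingRunner {
        /* Runs one test body. A test marked @KnownFailing is still executed and
           is expected to throw; if it unexpectedly passes, the run fails so the
           stale mark (and its tracking ticket) gets cleaned up promptly. */
        static void run(String name, KnownFailing mark, Runnable testBody) {
            if (mark == null) {
                testBody.run();            // normal test: any exception fails the run
                return;
            }
            try {
                testBody.run();
            } catch (Throwable expected) {
                System.out.println(name + " still fails as expected, see " + mark.ticket());
                return;                    // known failure: the run stays green
            }
            throw new AssertionError(name + " passed but is marked known-failing ("
                    + mark.ticket() + "); remove the mark or close the ticket.");
        }
    }

Marking a broken test would then amount to attaching the annotation with its
tracking ticket (for example, a hypothetical ASTERIXDB-XXXX) instead of
deleting or commenting the test out.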
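
And a sketch of the stricter expected-error checking Abdullah proposes, walked
through with the three-file duplicate-key scenario from his message. The
TestStep and Executor types and the .aql file names are illustrative
assumptions, not the real test framework classes:

    import java.util.List;

    class ExpectedErrorCheck {

        /* One statement file of a multi-file test case, plus an optional expected error. */
        record TestStep(String file, String expectedError) { }

        /* Stand-in for executing one statement file against the cluster. */
        interface Executor {
            void execute(String file) throws Exception;
        }

        /* Runs each step in order. A step with an expected error must throw an
           exception whose message mentions that error; a step without one must
           succeed. Either mismatch fails the test instead of going unnoticed. */
        static void run(List<TestStep> steps, Executor executor) throws Exception {
            for (TestStep step : steps) {
                try {
                    executor.execute(step.file());
                    if (step.expectedError() != null) {
                        throw new AssertionError(step.file() + " was expected to fail with: "
                                + step.expectedError());
                    }
                } catch (AssertionError e) {
                    throw e;
                } catch (Exception e) {
                    if (step.expectedError() == null
                            || !String.valueOf(e.getMessage()).contains(step.expectedError())) {
                        throw new AssertionError(step.file() + " failed with an unexpected error: "
                                + e.getMessage(), e);
                    }
                }
            }
        }

        public static void main(String[] args) throws Exception {
            // The three files from the thread: create, load duplicates, then drop.
            List<TestStep> steps = List.of(
                    new TestStep("create-dataset.aql", null),
                    new TestStep("insert-duplicates.aql", "duplicate key"),
                    new TestStep("drop-dataset.aql", null));
            run(steps, file -> {
                if (file.contains("duplicates")) {
                    throw new Exception("duplicate key"); // simulated engine error
                }
            });
            System.out.println("All steps behaved as expected.");
        }
    }

The point mirrors the thread: a step that is supposed to fail must fail with
the expected error, and a step that is supposed to pass must pass, so neither
a missing failure nor a meaningless exception can slip by silently.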
