stdcxx-dev mailing list archives

From "Travis Vitek" <>
Subject RE: design of regression tests (was: Re: [jira] Updated: (STDCXX-436) [Linux] MB_LEN_MAX incorrect)
Date Fri, 14 Sep 2007 07:46:28 GMT

Martin Sebor wrote:
>Travis, all,
>I'm trying to decide how we should treat the latest regression test
>for STDCXX-436:
>Up until now our approach to "designing" regression tests has been
>to simply copy the test case from each issue into the test suite
>with only a minimum of changes. The test you've added goes far
>beyond that, which makes it valuable because none of the other
>macros is being tested, but at the same time it marks quite the
>radical departure from the approach taken in all the other tests
>which I hesitate to make without bringing it for discussion first.

Yes. I noticed that none of the other macros were being tested for this
same issue, so I just lumped them all in there to avoid writing a test
that was just a copy/paste of this one.

>On the one hand, when an issue comes in that points out a problem
>with a feature so closely related to one or more others it makes
>perfect sense to make sure that (and add tests for) the other
>related features aren't broken, too. On the other hand, the name
>of each regression test clearly indicates which bug it is designed
>to exercise and when it should fail for some other reason (e.g.,
>a regression in one of the other related macros) it would mislead
>one into thinking that there's a problem with MB_LEN_MAX.


>I suppose that my view on this is in cases like this, when the
>bug report highlights a whole slew of features that aren't being
>tested we should add an ordinary (unit) test for the whole area
>and, if possible, also a regression test just for the bug. My
>rationale for keeping the two separate (even at the cost of
>duplicating some tested functionality) is that the bigger unit
>test is more likely to be enhanced or tweaked in the future and
>thus more likely to be subject to accidentally removing the test
>for the bug (or otherwise "regressing"), while the regression
>test is much more likely to be left alone and consequently less
>prone to such accidental regressions.
>Opinions? Comments? Thoughts?

I'm on board with reducing the scope of this test to just verify
MB_LEN_MAX and creating a new test for verifying all of the required
macros.
That said, I'm actually hoping to get feedback on the guts of the test
itself. It feels a bit fragile to me because it compiles a separate
executable to emulate getconf [for the necessary constants only], and to
do this I had to hardcode the compiler names and flags into the test.
I'm hoping this isn't something that will cause a bunch of trouble in
the future. If it looks fragile to you guys, then maybe it would be best
to just pass on Windows builds and, as you suggested earlier, invoke the
system getconf on other platforms.


