stdcxx-dev mailing list archives

From Martin Sebor <>
Subject regression test for every issue (was: Re: svn commit: r560283 - /incubator/stdcxx/trunk/include/ansi/cwchar)
Date Fri, 10 Aug 2007 18:12:44 GMT
Farid Zaripov wrote:
>> -----Original Message-----
>> From: Martin Sebor [] 
>> Sent: Thursday, August 09, 2007 12:33 AM
>> To:
>> Subject: Re: svn commit: r560283 - 
>> /incubator/stdcxx/trunk/include/ansi/cwchar
>> Does this fix a bug? (It looks like it does.) If so, it would 
>> be nice to have an issue in Jira with a test case for it :)
>   This patch fixed a compilation problem with some test or example
> on Cygwin. I will create a JIRA issue a bit later. But about the test
> case: do we need test cases (which would be committed in the
> tests/regress directory) for compilation problems?

Ideally, I think the answer would be yes.

But I'm beginning to suspect that the goal may not be entirely
practical given our current setup of one test program per issue.
We have over 500 issues in Jira today, which means we've been
creating issues at a rate of nearly 2 a day since the inception
of the project. If we wanted to have a regression test for each
and every one of them, I would be concerned about how long it
would take to compile the whole regression test suite, and how
much disk space all the little tests might end up taking up
(we're already running into disk space issues in our nightly
builds).

So I think we might need to either give up on the goal of having
a regression test for every issue (which would be a pity since it
would, IMO, adversely impact the quality of the product), or
change our process so as to obviate both of these concerns.

One approach to dealing with the problems I'm concerned about
is to run the regression test suite less often than the main
test suite. I'm not too fond of this solution since it decreases
the value of the regression test suite.

Another way of dealing with them is to group tests for multiple
issues into the same program. The risk with taking this approach
is that if one such test fails, only some (or none, if the
program fails to compile) of the others will end up being
exercised. That is,
unless we devise a way of dealing with this situation. The
approach to doing so I have seen (e.g., in the PlumHall C++
library validation suite that Rogue Wave licenses and I'd like
to start using for certification builds) is to make the test
harness smart enough to detect such failure(s) and recompile
and/or re-run the tests individually if needed.

What do you all think about taking this route?

