stdcxx-dev mailing list archives

From Martin Sebor <>
Subject Re: [RFC] build result views
Date Sun, 25 Nov 2007 20:30:16 GMT
Mark Brown wrote:
> Have you guys seen the Boost regression test page?

I looked at it a long time ago but not recently. It looks like they
have made some improvements since then.

Interestingly, the set of platforms indicates they're testing with
STLport but not with our implementation. Farid, do you have any idea
what it would take to get them to start testing with stdcxx (and
publish test results with it)? It looks like the sets of results
they publish now are all provided by volunteers. Could we offer to
do the testing with stdcxx for them?

> I think there are some good ideas there. I like the links to
> the test sources and to the compiler errors for the ones that
> error out.

The links would definitely be useful. We need to come up with a way
to annotate the command lines in build logs so we can either easily
extract them or insert HTML anchors. Any ideas anyone?
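One possible scheme, purely as a sketch: have the build prepend a fixed marker line before each command it logs (the "### compile:" format below is invented for illustration, not an existing stdcxx convention), so that a small filter can wrap each marked command in an HTML anchor the result pages could link to.

```python
import html
import re

# Hypothetical convention: the build writes "### compile: <test-name>"
# immediately before each command line it logs. The marker format is an
# assumption; any unambiguous prefix would work.
MARKER = re.compile(r"^### compile: (\S+)$")

def annotate_log(lines):
    """Emit HTML-escaped log lines, inserting an anchor before each
    marked command so a result page can link to, e.g.,
    build.html#cmd-22.locale.codecvt."""
    out = []
    for line in lines:
        m = MARKER.match(line)
        if m:
            out.append('<a id="cmd-%s"></a>' % html.escape(m.group(1)))
        out.append(html.escape(line))
    return "\n".join(out)
```

With markers in place, extracting the command lines for other purposes is the same one-pass scan, which is the appeal of annotating at build time rather than guessing commands from raw logs afterward.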

I also like the annotations explaining the test failures -- for
example here: We've been talking about
dealing with expected failures, but this is an approach we hadn't
considered. It seems like an interesting alternative to the
expected-failure extension to the test driver that we discussed.
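The Boost-style annotations could be approximated with a simple table mapping test names to explanations, consulted when a result cell is rendered. Everything below (the table format, the names, the cell text) is hypothetical; the thread only floats the idea.

```python
# Hypothetical annotation table: test name -> short explanation shown on
# the result page next to an expected failure.
EXPECTED_FAILURES = {
    "22.locale.codecvt": "known failure: platform codecvt bug",
}

def status_cell(test, passed):
    """Return the text for a result-table cell, flagging annotated
    failures as expected (XFAIL) rather than as regressions (FAIL)."""
    if passed:
        return "OK"
    note = EXPECTED_FAILURES.get(test)
    return "XFAIL (%s)" % note if note else "FAIL"
```

The attraction over an in-driver expected-failure mechanism is that the annotations live with the result publishing, so they can be updated without touching or rerunning the tests.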


> --Mark
> Martin Sebor wrote:
>> Farid Zaripov wrote:
>>>> -----Original Message-----
>>>> From: Martin Sebor [] On Behalf Of Martin Sebor
>>>> Sent: Tuesday, November 13, 2007 6:18 AM
>>>> To:
>>>> Subject: [RFC] build result views
>>>> I'm looking for feedback on the two sets of nightly result pages we 
>>>> currently publish:
>>>> and
>>>> Specifically, I'm wondering what would people think about replacing 
>>>> the first page with the second, or something like it. Do you find 
>>>> yourself using the first page more or do you prefer the second one, 
>>>> and why?
>>>   The first page is useful to see the overall status of the library.
>>> I'm using it just to make sure that the Windows builds are green :)
>> :) That's how I've been using it too. The danger of colorizing
>> entire builds the way we do on that page is in making it easy
>> to miss important failures occurring only in a small number of
>> tests. I.e., you might see green even when a critical piece of
>> the library is broken (recall the recent binary incompatibility
>> with XLC++ exceptions).
>> Anyway, sounds like adding colorization either to builds.html
>> or to the Logs and Columns table on each Multi-platform Test
>> Result View is one enhancement you'd like to see, correct?
>> What about data? Do you use any of the data from the colorized
>> page? E.g., the number of components (examples, tests, locales)
>> vs the number of those that failed? I personally don't think
>> they are terribly useful but adding them shouldn't be too hard.
>> I do plan on adding the duration column (with a lot more detail,
>> such as how long the library took to build in user, system, and
>> wall clock time, and the same for all examples, the test driver,
>> tests, and locales). Anything else?
>>>   Of course, the second page is more convenient for seeing
>>> which examples or tests failed and how those failures depend
>>> on the build type.
>>>   I would also like to be able to merge the results from
>>> multiple platforms into one page, e.g.:
>>> - all windows builds (MSVC and ICC);
>>> - all MSVC builds;
>>> - all ICC/windows builds;
>>> - all ICC builds (windows and linux);
>>> - all MSVC 8.0 builds + all gcc 4.2.0 builds;
>> This is something I'd like to be able to do as well, and I have
>> in a small number of cases. It can easily be done by changing
>> the genxviews script to generate whatever combination of builds
>> you need (we should move the data out of the script to make it
>> possible to do this without modifying the script itself).
>> After making the changes you just run your custom genxviews like
>> this:
>>   $ genxviews > $HOME/public_html/stdcxx/results/builds.html
>> Of course, the ultimate implementation would let you do it on
>> demand (e.g., as you suggest below).
>> One thing to keep in mind is that the more builds you squeeze on
>> a page the harder it becomes to see them all at the same time. At
>> a certain point it starts to defeat the purpose of the page because
>> you end up scrolling it left and right to see the results for all
>> the platforms.
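Moving the build data out of genxviews might look like this minimal sketch: a JSON file maps view names to lists of build identifiers, so generating a new combination means editing data rather than the script. The file name, view names, and build identifiers here are all invented for illustration.

```python
import json

# Hypothetical views.json:
#   {"all-msvc": ["msvc-8.0-11s", "msvc-8.0-15d"],
#    "icc-all":  ["icc-10.0-linux", "icc-10.0-win"]}
def load_view(config_path, view_name):
    """Return the list of build names belonging to a named view,
    read from an external JSON config instead of hard-coded data."""
    with open(config_path) as f:
        views = json.load(f)
    return views[view_name]
```

A custom genxviews run would then just take a view name and pass the resulting build list to the existing page generator.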
>>> ...
>>>   It could be implemented like the current page
>>> (
>>> but with a checkbox on each line of the table and a "Next"
>>> button somewhere on the page.
>> That would be pretty cool. The only thing is that generating the
>> pages takes quite a bit of time (you can see how long in the Time
>> column), so you might have to wait a few minutes to get the results
>> for a custom selection. We could probably optimize it to just a few
>> seconds by pre-processing the individual logs so as not to make the
>> script work so hard.
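The log pre-processing mentioned above could take the form of a summary cache: parse each raw log once, store the result, and reuse it while the log is unchanged, so assembling a custom selection of builds only pays the parsing cost the first time. This is a sketch under assumed names; summarize() here just counts FAIL lines as a stand-in for the real, expensive log parse.

```python
import json
import os

def summarize(log_path):
    # Placeholder for the expensive parse of a raw build log.
    with open(log_path) as f:
        fails = sum(1 for line in f if "FAIL" in line)
    return {"fails": fails}

def cached_summary(log_path, cache_dir="cache"):
    """Return the parsed summary for a log, reusing a cached copy
    while the cache is at least as new as the log itself."""
    os.makedirs(cache_dir, exist_ok=True)
    cache = os.path.join(cache_dir, os.path.basename(log_path) + ".json")
    if (os.path.exists(cache)
            and os.path.getmtime(cache) >= os.path.getmtime(log_path)):
        with open(cache) as f:
            return json.load(f)
    summary = summarize(log_path)
    with open(cache, "w") as f:
        json.dump(summary, f)
    return summary
```

An on-demand page generator built on such a cache would only need to merge a handful of small summary files per request instead of rescanning full logs.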
>> Martin
>>>> If neither, how do you analyze build results and why do you find 
>>>> your system preferable to what we have? What does your ideal result 
>>>> page look like? What data should it show and how should it be 
>>>> presented?
>>> Farid.
