incubator-flex-dev mailing list archives

From Jeff Conrad <>
Subject [RT] Ideas on getting the entire test suite for the sdk to run in 10 minutes or less
Date Tue, 14 Aug 2012 16:01:21 GMT

I'd like to help the project get to a point to where we can run the entire
test suite for the sdk in 10 minutes or less.  I think that's a worthy
goal, and I'm willing to help make that a reality.

If we get the testing time down that far, we can review a lot more
contributions, and potential contributors can verify that their patches
don't fail any tests before submitting them.  It also means that if a test
does fail, the person who wrote the code still has the context of what they
did fresh in their mind and can fix it quickly.

For reference, I ran the entire Mustella suite last night: ./
-createImages -all took 4 hours, 3 minutes, and 50 seconds on a quad-core
machine running Windows 7 using the Git Bash (yay! no cygwin!).  To make it
work with the Git Bash, I changed the shell variable in mustella/build.xml
to just sh.exe.  As a plus, the build somewhat intelligently parallelized
the compilation of all the test swfs, so it was compiling 4 swfs at a time;
the ant script then ran all the test swfs one at a time.  I know Mustella
is more a functional / integration test suite, so by definition it's going
to run slower than a suite of unit tests.
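
The parallel-compile idea can be sketched outside of ant with xargs, which
fans work out to N processes.  This is just an illustration: "mxmlc" stands
in for the real Mustella compile step, and the COMPILE indirection (which
defaults to echo so the sketch runs anywhere) is my own placeholder, not
anything in the sdk's build:

```shell
# Compile test swfs 4 at a time instead of serially.
# COMPILE is a stand-in for the actual mxmlc invocation (assumption);
# it defaults to echo so the sketch is runnable without the sdk.
COMPILE=${COMPILE:-echo}
printf '%s\n' a.mxml b.mxml c.mxml d.mxml e.mxml \
  | xargs -n 1 -P 4 "$COMPILE"
```

With -P 4 the completion order is nondeterministic, which is fine for
independent compiles but means any shared output directory needs unique
per-swf filenames.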

I also know that at least a few people on this list want to refactor the
sdk so it's unit-testable.  I definitely support this and would like to
help in any way I can on that front.

I want the entire suite: unit, integration, and functional to be able to
run in 10 minutes or less.  I have two ideas as to how we can make this
happen, and I'm open to more.

One idea would be to intelligently look at the files that a given patch /
changeset affects and only build and run the tests that cover that code.
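
A crude first cut at that mapping could key off directory names.  The path
convention below (frameworks/projects/X maps to mustella/tests/X) is an
assumption for illustration, not the sdk's actual layout, and the changed
file list would really come from the patch itself:

```shell
# Derive a candidate test list from the files a patch touches.
# Paths here are hypothetical examples, not real sdk files.
changed_files="frameworks/projects/spark/src/Button.as
frameworks/projects/mx/src/List.as"
printf '%s\n' "$changed_files" \
  | sed -n 's|^frameworks/projects/\([^/]*\)/.*|mustella/tests/\1|p' \
  | sort -u
```

The hard part in practice is the mapping itself: class dependencies cross
project boundaries, so a per-directory heuristic would need a safety net
like a periodic full run.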

The other idea that comes to mind is to spin up a group of virtual servers
in an IaaS cloud and split up the workload.  If we can split this up
evenly, and my original number is right (roughly 244 minutes of work
against a 10-minute budget), it means 24+ servers.  There are easy ways to
get the files across that many servers quickly, so the file transfer part
wouldn't be that hard.
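
Assigning tests to servers could be as simple as round-robin over the test
list.  A sketch, where N and the worker names are placeholders and the two
test directories are made-up examples:

```shell
# Shard the full test list across N workers round-robin.
# N=24 comes from the ~4h runtime vs. 10-minute target above;
# "workerK" hostnames are placeholders.
N=24
i=0
while read -r test; do
  echo "worker$((i % N)): $test"
  i=$((i + 1))
done <<'EOF'
tests/gumbo/components
tests/mx/controls
EOF
```

Round-robin ignores that some test swfs take far longer than others; a
smarter scheduler would bin by historical runtime so the slowest shard
still finishes inside the budget.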

What do you guys think?  Where would you start?

