ant-dev mailing list archives

From Steve Loughran <ste...@apache.org>
Subject Re: using multiple properties in the 'if' and 'unless' conditions
Date Mon, 26 Jun 2006 12:17:48 GMT
Riedel Thomas (KSFD 121) wrote:
> Yes, I agree our kind of Ant usage might be a bit beyond the horizon. We
> are doing continuous integration for a 5 million LOC project, generic
> automated JUnit testing, automatic deployment into a production-like
> server pool, online testing, web testing, automated metrics generation
> for the architecture group, automated baseline quality level assertion,
> messaging, etc. etc., and all this with Ant, quite error-proof ;) The
> main problem is that not every error or event should break the build;
> instead we have different levels of build quality. So the flow of the
> different steps (targets) is very dynamic. We make copious use of <if>,
> <trycatch> and all the other nice contrib tasks that make Ant more of a
> scripting language than a descriptive build tool.
> 
> The other possibility for a bit more dynamic flow of control would be a
> dynamic condition, I guess. In the sense that the evaluation of the
> condition should not be done once when the task is executed, but every
> time somebody references that condition. But that's just a guess.
> 

Ok. I think you've just moved beyond ant's terms-of-use :)
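
For anyone on the list who hasn't met the ant-contrib tasks, the kind of 
thing Thomas is describing looks roughly like this (all the property and 
target names here are invented):

  <!-- "promote-build" and all the properties below are made-up names -->
  <trycatch property="failure.message">
    <try>
      <if>
        <and>
          <isset property="tests.passed"/>
          <isset property="metrics.generated"/>
        </and>
        <then>
          <antcall target="promote-build"/>
        </then>
        <else>
          <!-- don't break the build, just record a lower quality level -->
          <property name="build.quality" value="degraded"/>
        </else>
      </if>
    </try>
    <catch>
      <echo>step failed: ${failure.message}</echo>
    </catch>
  </trycatch>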

I'm currently busy trying to get JUnit reporting more fully integrated 
into SmartFrog, because we do a fair bit of distribution stuff 
ourselves, and I'm trying to do the following:

-prepare the app server (JBoss) with automatic copy-in of the files 
needed to get Hibernate + MySQL working
-start the app server
-start MySQL via net start if it isn't already live
-deploy by copying the WAR over
-spin until the happy.jsp page is there or we time out (there's a 
<waitfor> sketch just after this list)
-run HttpUnit tests against the deployed app
-run Cactus against the deployed app
-publish the results of the test runs as HTML files
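
The "spin until happy" step is just core Ant's <waitfor>; a minimal 
sketch, with the URL, timeouts and property names as placeholders:

  <!-- placeholder URL and property names -->
  <waitfor maxwait="5" maxwaitunit="minute"
           checkevery="10" checkeveryunit="second"
           timeoutproperty="deploy.timed.out">
    <!-- passes once the health-check page answers over HTTP -->
    <http url="http://${target.host}:8080/app/happy.jsp"/>
  </waitfor>
  <fail if="deploy.timed.out"
        message="happy.jsp never showed up; giving up on this deployment"/>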

The way we do deployment, I can host the test runs on different machines 
from the server, which tests the networking side of things better. And, 
as we can deploy to machines other than the test box, developers can 
test on machines other than their own. We haven't looked into server 
pooling for this little project, though it's been done for other things 
in the past. There are some people who want this, the hard part being 
resource allocation across clusters.

I started off using the <junit> XML reports for reporting, but have 
moved on since then (the stock route is sketched below, for contrast):
  1. I don't buffer into a DOM; I stream to the filesystem. That way a 
JVM crash will still preserve a trace.
  2. We are capturing the commons-logging output at the different log 
levels, rather than just stdout; the different log level data is 
included in the XML report.
  3. I'm pumping out XHTML with a CSS file that gets extracted from the 
classpath and pasted inline. This gives you readable HTML without a 
post-processing stage.
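
For contrast, the stock route I started from, per-suite XML plus a 
second pass to get HTML, looks roughly like this; the classpath 
reference and directory properties are placeholders:

  <!-- test.classpath, report.dir and test.classes.dir are placeholders -->
  <junit printsummary="on" fork="true" haltonfailure="false"
         failureproperty="tests.failed">
    <classpath refid="test.classpath"/>
    <!-- the xml formatter holds each testsuite's report in memory
         until the suite finishes -->
    <formatter type="xml"/>
    <batchtest todir="${report.dir}/raw">
      <fileset dir="${test.classes.dir}" includes="**/*Test.class"/>
    </batchtest>
  </junit>

  <!-- the post-processing stage that the inline-CSS XHTML avoids -->
  <junitreport todir="${report.dir}">
    <fileset dir="${report.dir}/raw" includes="TEST-*.xml"/>
    <report format="frames" todir="${report.dir}/html"/>
  </junitreport>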

If you are going to be at ApacheCon this week, I will be demoing it on 
Friday, "Beyond Unit Testing". Otherwise, try sneaking into Google's 
Automated Testing conference in London in September.

I'm also debating a little trip to Geneva, to visit the CERN people who 
have some serious test problems. There are some PhD students I'm working 
with on deployment there, and it would be interesting to get a little 
meeting together of people with interesting test problems in the .ch 
area. Do you qualify?

See also GridUnit, http://gridunit.sf.net ; Alexandre is at CERN right 
now too; we are collaborating on a common wire representation of 
serialized test results. GridUnit is good at presenting the results 
from tests across hundreds of machines.

Anyhow, the point of this is that while Ant is a great build tool, 
distributed deployment and testing are beyond its direct scope, 
particularly once you add in more complex failure modes, especially 
partial failures of tests on some machines. You can see these 
limitations with the condition stuff. We should see if we can come up 
with a better way of handling conditions, but it is your failure modes 
that are the problem; Ant has a very simple world view, "builds pass or 
they fail", with the main fault handling being "optionally ignore 
failures" (something like the sketch below).

SmartFrog itself doesn't have a much better world view, but you can 
easily define new containers with new policies, like the "kill my child 
components if they take too long to terminate" container, or a 
"terminate and redeploy my child components every 24 hours" one. Ant 
doesn't have <target>s that do that kind of thing, and there isn't 
enough in the execution model to make it easy to add.

-steve


