aries-dev mailing list archives

From: Mark Nuttall <>
Subject: Re: building Apache Aries trunk from the top level pom
Date: Wed, 15 Sep 2010 16:20:01 GMT
I too find the application itests particularly flaky when run from a local
mvn command line. I spent most of today trying to debug the
sometimes-failing tests: I couldn't get any of them to fail under a
debugger, and I couldn't work out from the trace why they had failed
otherwise :(

Emily, Chris and I are going to be making changes to the application
provisioning and runtime areas for a while yet. I'm sorry if we've
introduced further problems in this area. I do think it's timing related
based on today's investigations.
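
To spell out what a "timeout waiting for a service" failure amounts to: the
itests ultimately block until an expected service is registered and give up
after a fixed wait. The snippet below is only a rough sketch using a plain
OSGi ServiceTracker, not the actual itest helper code; the class and method
names are invented for illustration.

import org.osgi.framework.BundleContext;
import org.osgi.util.tracker.ServiceTracker;

// Illustration only: wait up to 'timeout' ms for a service of the given
// type to be registered. A null return is the situation the failing itests
// report as a timeout waiting for a service.
public class ServiceWait {
    public static Object waitForService(BundleContext ctx, String className,
                                        long timeout) throws InterruptedException {
        ServiceTracker tracker = new ServiceTracker(ctx, className, null);
        tracker.open();
        try {
            // Blocks until a matching service appears or the timeout elapses.
            return tracker.waitForService(timeout);
        } finally {
            tracker.close();
        }
    }
}

If provisioning legitimately takes longer than the wait on a loaded machine,
a bigger timeout will make the test pass; if a registration is genuinely
racy, no timeout will fix it reliably, which is why I'd like to pin down the
ordering first.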


On 15 September 2010 17:03, Valentin Mahrwald <> wrote:

> I just checked and had a 50% success rate :)
> I have found that some of the new tests in the application itest project
> are flaky (maybe the timeouts are not big enough), but the rest seem to
> work for me.
> Valentin
> On 15 Sep 2010, at 10:28, Joe Bohn wrote:
> >
> > I seem to be having lots of problems building Apache Aries trunk from
> > the top-level pom because of test errors, and the more tests we add the
> > worse it gets.  For me it is virtually impossible to build.  Once in a
> > while I'll get lucky and things will actually work, but most of the time
> > there are test failures somewhere along the way.  The failure is often a
> > timeout waiting for a service, but there is also a growing number of
> > other (strange) failures such as InvocationTargetExceptions, invalid
> > state, NPEs, etc.
> >
> > When attempting to run a build from the top level, a test that passes on
> > one attempt will fail on the next, and one that failed on the last run
> > will pass on the next (if the build even gets that far).  All in all, it
> > is pretty much impossible to build from the top level.
> >
> > The only way I have succeeded in building all of Apache Aries is to
> > build each module individually, in the order specified in the top-level
> > pom (which I think is now correct).  As I hit failures I rebuild just
> > that module until it succeeds and then move on to the next one.
> >
> > So this raises two questions:
> > 1) Am I the only one seeing these kinds of problems?  If it is just me,
> > then I need to figure out what is wrong with my environment.
> >
> > 2) If it is more widespread, then it seems to me that we have issues we
> > need to address.  Certainly we are dealing with a dynamic, loosely
> > coupled system, and timing scenarios will very likely arise occasionally.
> > However, the frequency and variety of failures I'm seeing make me wonder
> > whether we have larger timing or synchronization issues that have not
> > yet been addressed.  Do you agree?  If so, then we need to come up with
> > some way to isolate and resolve them.
> >
> > Joe
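
For anyone else hitting this, Joe's per-module workaround above can be
shortened a little without leaving the top level. Assuming the usual Maven
options (the module names below are placeholders; check the top-level pom
for the real artifactIds), something like

  mvn clean install -rf :<failing-module-artifactId>

resumes the reactor from the module that failed, and

  mvn clean install -pl <module-directory> -am

rebuilds just that module plus whatever it depends on. Neither fixes the
underlying timing problem, of course; it just shortens the retry loop.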
