hadoop-general mailing list archives

From Jakob Homan <jho...@yahoo-inc.com>
Subject Re: Patch testing
Date Thu, 18 Nov 2010 01:08:24 GMT
True, each patch would get a -1 and the failing tests would need to be 
verified as the known-bad ones (BTW, it would be great if Hudson could list 
which tests failed in the message it posts to JIRA).  But that's still 
quite a bit less error-prone work than if the developer runs the tests 
and test-patch themselves.  Also, with 22 being cut, there are a lot of 
patches up in the air and several developers are juggling multiple 
patches.  The more automation we have, even if it's not perfect, the 
fewer errors we'll make.
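As an aside, the "list which tests failed" wish is straightforward to script against the XML reports that Ant's <junit> formatter writes (the TEST-*.xml files). The sketch below is a hypothetical illustration, not Hudson's actual implementation; the report directory and file naming follow the standard Ant JUnit formatter conventions.

```python
# Hypothetical sketch: collect the names of failing tests from Ant/JUnit
# XML reports (TEST-*.xml) so they could be appended to the JIRA comment.
# This is NOT Hudson's real code, just an illustration of the idea.
import glob
import os
import xml.etree.ElementTree as ET

def failing_tests(report_dir):
    """Return sorted 'Class.method' names for every failed/errored testcase."""
    failures = []
    for path in glob.glob(os.path.join(report_dir, "TEST-*.xml")):
        root = ET.parse(path).getroot()
        for case in root.iter("testcase"):
            # In the Ant formatter's schema, a <testcase> that did not pass
            # carries a <failure> or <error> child element.
            if case.find("failure") is not None or case.find("error") is not None:
                failures.append("%s.%s" % (case.get("classname"), case.get("name")))
    return sorted(failures)
```

A Hudson job could run this after the test phase and fold the returned names into the -1 message, so a developer seeing the comment can immediately check the list against the known-bad tests.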

Nigel Daley wrote:
> On Nov 17, 2010, at 3:11 PM, Jakob Homan wrote:
>>> It's also ready to run on MapReduce and HDFS but we won't turn it on until these projects build and test cleanly.  Looks like both these projects currently have test failures.
>> Assuming the projects are compiling and building, is there a reason to not turn it on despite the test failures? Hudson is invaluable to developers who then don't have to run the tests and test-patch themselves.  We didn't turn Hudson off when it was working previously and there were known failures.  I think one of the reasons we have more failing tests now is the higher cost of doing Hudson's work (not a great excuse I know).  This is particularly true now because several of the failing tests involve tests timing out, making the whole testing regime even longer.
> Every single patch would get a -1 and need investigation.  Currently, that would be about 83 investigations between MR and HDFS issues that are in patch available state.  Shouldn't we focus on getting these tests fixed or removed?  Also, I need to get MAPREDUCE-2172 fixed (applies to HDFS as well) before I turn this on.
> Cheers,
> Nige
