hadoop-general mailing list archives

From Konstantin Boudnik <...@apache.org>
Subject Re: Patch testing
Date Fri, 17 Dec 2010 23:46:10 GMT
I do believe that it makes sense to wait a bit longer before doing
this. If HDFS is added to the test-patch queue right now, we'll get
nothing but dozens of -1'ed patches. The number of failing tests has
been reduced from 16 down to 4. Of those, one is a real bug (revealed
by HDFS-903) and the other three (all of the same nature) need to be
investigated further.

I'm for fixing the trunk first.

On Fri, Dec 17, 2010 at 15:19, Jakob Homan <jghoman@gmail.com> wrote:
> So, with test-patch updated to show the failing tests, saving developers
> the trouble of verifying that the failed tests are all known ones, how do
> people feel about turning on test-patch again for HDFS and mapred?  I
> think it'll help prevent any more tests from entering
> the "yeah, we know" category.
>
> Thanks,
> jg
>
>
> On Wed, Nov 17, 2010 at 5:08 PM, Jakob Homan <jhoman@yahoo-inc.com> wrote:
>> True, each patch would get a -1 and the failing tests would need to be
>> verified as the known-bad ones (BTW, it would be great if Hudson could list
>> which tests failed in the message it posts to JIRA).  But that's still quite
>> a bit less error-prone than having the developer run the tests and
>> test-patch themselves.  Also, with 22 being cut, there are a lot of patches
>> up in the air and several developers are juggling multiple patches.  The
>> more automation we have, even if it's not perfect, the fewer errors we'll
>> make.
>> -jg
>>
>> Nigel Daley wrote:
>>>
>>> On Nov 17, 2010, at 3:11 PM, Jakob Homan wrote:
>>>
>>>>> It's also ready to run on MapReduce and HDFS but we won't turn it on
>>>>> until these projects build and test cleanly.  Looks like both these
>>>>> projects currently have test failures.
>>>>
>>>> Assuming the projects are compiling and building, is there a reason to
>>>> not turn it on despite the test failures?  Hudson is invaluable to
>>>> developers who then don't have to run the tests and test-patch themselves.
>>>> We didn't turn Hudson off when it was working previously and there were
>>>> known failures.  I think one of the reasons we have more failing tests now
>>>> is the higher cost of doing Hudson's work (not a great excuse, I know).
>>>> This is particularly true now because several of the failing tests involve
>>>> tests timing out, making the whole testing regime even longer.
>>>
>>> Every single patch would get a -1 and need investigation.  Currently, that
>>> would be about 83 investigations between MR and HDFS issues that are in
>>> patch available state.  Shouldn't we focus on getting these tests fixed or
>>> removed?  Also, I need to get MAPREDUCE-2172 fixed (applies to HDFS as
>>> well) before I turn this on.
>>>
>>> Cheers,
>>> Nige
>>
>>
>
