hadoop-general mailing list archives

From Konstantin Boudnik <...@apache.org>
Subject Re: Patch testing
Date Sat, 18 Dec 2010 03:01:05 GMT
One more issue needs to be addressed before test-patch is turned on for HDFS:
  https://issues.apache.org/jira/browse/HDFS-1511
--
  Take care,
Konstantin (Cos) Boudnik



On Fri, Dec 17, 2010 at 16:17, Konstantin Boudnik <cos@apache.org> wrote:
> Considering that, because of these 4 faulty cases, every patch will be
> -1'ed, a patch author will still have to look at it and comment on why
> this particular -1 isn't valid.  Less work, perhaps, but messier IMO.
> I'm not blocking it - I just feel like there's a better way.
>
> --
>   Take care,
> Konstantin (Cos) Boudnik
>
>
>
> On Fri, Dec 17, 2010 at 15:55, Jakob Homan <jghoman@gmail.com> wrote:
>>> If HDFS is added to the test-patch queue right now we get
>>> nothing but dozens of -1'ed patches.
>> There aren't dozens of patches being submitted currently.  The -1 isn't
>> the important thing; it's the grunt work of actually running (and waiting
>> for) the tests, test-patch, etc. that Hudson does so that the developer
>> doesn't have to.
>>
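For reference, this is roughly the manual routine per patch that Hudson takes
over.  The exact ant targets and property names below are assumptions based
on the contributor docs of the time, so treat it as a sketch rather than a
recipe:

    # Run the project's full unit test suite (slow).
    ant test

    # Run the pre-commit checks against a local patch file.
    # Property names and paths are assumed; adjust to your environment.
    ant test-patch \
        -Dpatch.file=/path/to/my.patch \
        -Dfindbugs.home=/path/to/findbugs \
        -Dforrest.home=/path/to/forrest
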
>> On Fri, Dec 17, 2010 at 3:48 PM, Dhruba Borthakur <dhruba@gmail.com> wrote:
>>> +1, thanks for doing this.
>>>
>>> On Fri, Dec 17, 2010 at 3:19 PM, Jakob Homan <jghoman@gmail.com> wrote:
>>>
>>>> So, with test-patch updated to show the failing tests, saving the
>>>> developers the need to go and verify that the failed tests are all
>>>> known, how do people feel about turning on test-patch again for HDFS
>>>> and mapred?  I think it'll help prevent any more tests from entering
>>>> the "yeah, we know" category.
>>>>
>>>> Thanks,
>>>> jg
>>>>
>>>>
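As an aside on surfacing the failed tests in the JIRA comment: here is a
minimal sketch that scrapes the JUnit XML reports for failing test classes.
This is not the actual test-patch implementation, and the report location is
an assumption:

    # Hypothetical sketch: list test classes whose JUnit reports contain
    # failures or errors (assumes ant writes reports to build/test/).
    grep -lE '<(failure|error)' build/test/TEST-*.xml \
        | sed -e 's|.*/TEST-||' -e 's|\.xml$||'
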
>>>> On Wed, Nov 17, 2010 at 5:08 PM, Jakob Homan <jhoman@yahoo-inc.com> wrote:
>>>> > True, each patch would get a -1 and the failing tests would need to be
>>>> > verified as the known-bad ones (BTW, it would be great if Hudson could
>>>> > list which tests failed in the message it posts to JIRA).  But that's
>>>> > still quite a bit less error-prone work than if the developer runs the
>>>> > tests and test-patch themselves.  Also, with 22 being cut, there are a
>>>> > lot of patches up in the air and several developers are juggling
>>>> > multiple patches.  More automation, even if it's not perfect, will
>>>> > decrease the errors we may make.
>>>> > -jg
>>>> >
>>>> > Nigel Daley wrote:
>>>> >>
>>>> >> On Nov 17, 2010, at 3:11 PM, Jakob Homan wrote:
>>>> >>
>>>> >>>> It's also ready to run on MapReduce and HDFS but we won't turn it on
>>>> >>>> until these projects build and test cleanly.  Looks like both these
>>>> >>>> projects currently have test failures.
>>>> >>>
>>>> >>> Assuming the projects are compiling and building, is there a reason to
>>>> >>> not turn it on despite the test failures?  Hudson is invaluable to
>>>> >>> developers who then don't have to run the tests and test-patch
>>>> >>> themselves.  We didn't turn Hudson off when it was working previously
>>>> >>> and there were known failures.  I think one of the reasons we have
>>>> >>> more failing tests now is the higher cost of doing Hudson's work (not
>>>> >>> a great excuse, I know).  This is particularly true now because
>>>> >>> several of the failing tests involve tests timing out, making the
>>>> >>> whole testing regime even longer.
>>>> >>
>>>> >> Every single patch would get a -1 and need investigation.  Currently,
>>>> >> that would be about 83 investigations between MR and HDFS issues that
>>>> >> are in patch-available state.  Shouldn't we focus on getting these
>>>> >> tests fixed or removed?  Also, I need to get MAPREDUCE-2172 fixed
>>>> >> (applies to HDFS as well) before I turn this on.
>>>> >>
>>>> >> Cheers,
>>>> >> Nige
>>>> >
>>>> >
>>>>
>>>
>>>
>>>
>>> --
>>> Connect to me at http://www.facebook.com/dhruba
>>>
>>
>
