hadoop-common-dev mailing list archives

From Chris Douglas <cdoug...@apache.org>
Subject Re: Patch review process
Date Wed, 11 Feb 2015 21:11:20 GMT
+1; ChrisN's formulation is exactly right.

The patch manager can't force (or shame) anyone into caring about your
issue. One of the benefits of RTC is that parts of the code with a
single maintainer are exposed. If you can't find collaborators, either
(a) this isn't the right community for that module or (b) the project
needs to acknowledge and address the "bus factor" [1] for that code.
By observing and directing review, the patch manager accumulates
context most contributors don't have.

Does anyone want to work with INFRA to test Crucible? It looks like
Ambari started exploring it last year [2]. From David's response, it
sounds like they'd be willing to work with a project to experiment,
but most requests have been for Gerrit. -C

[1] http://en.wikipedia.org/wiki/Bus_factor
[2] https://issues.apache.org/jira/browse/INFRA-8430

On Tue, Feb 10, 2015 at 1:19 PM, Chris Nauroth <cnauroth@hortonworks.com> wrote:
> I don't anticipate a patch manager introducing a new bottleneck.
> As originally described by Chris D, the role of the patch manager is not
> to review and commit all patches in an assigned area.  Instead, the
> responsibility is queue management: following up on dormant jiras to make
> sure progress is made.  This might involve the patch manager doing the
> review and commit, but it also might mean contacting someone else for
> review, closing it if it's a duplicate, or making a won't-fix decision.
> It¹s the kind of activity that Allen and Steve have done a lot lately.
> I see the patch manager role as addressing the fact that the community
> itself has grown large and complex.  As others have mentioned, it's not
> always clear to a new contributor who to ask for a code review.  A patch
> manager would be familiar enough with the community to help steer their
> patches in the right direction.
> I suppose we don't need to formalize this too much.  If anyone feels
> capable of doing this kind of queue management in a certain area of
> expertise, please dive in.  Congratulations, you are now a patch manager!
> I'm sure everyone would appreciate it.
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
> On 2/10/15, 9:31 AM, "Tsuyoshi Ozawa" <ozawa@apache.org> wrote:
>>> How could we speed up?
>>+1 for trying Crucible. We should try it to see whether it integrates
>>well and can solve the problem of "splitting discussion". If Crucible
>>solves it, that would be great.
>>About the patch manager, I'm concerned that it could delay reviews if
>>many small patches pile up and the patch manager's workload keeps
>>growing.
>>About "long-lived old patches", how about reopening them automatically
>>after a specific period passes? That would also serve as a ping to the
>>ML and save the time spent checking old patches.
>>> - Some talk about how to improve precommit. Right now it takes hours to
>>run the unit tests, which slows down patch iterations. One solution is running
>>tests in parallel (and even distributed). Previous distributed experiments
>>have done a full unit test run in a couple minutes, but it'd be a fair
>>amount of work to actually make this production ready.
>>> - Also mention of putting in place more linting and static analysis.
>>Automating this will save reviewer time.
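As a rough illustration of the parallel-test idea above (a hedged sketch, not the actual precommit machinery): fan independent test modules out across workers and collect results. The module names and the `run_module` stand-in are hypothetical; a real version would shell out to something like `mvn -pl <module> test` per worker.

```python
# Sketch: run test modules concurrently instead of serially.
from concurrent.futures import ThreadPoolExecutor

def run_module(module):
    # Stand-in for invoking the module's test suite and collecting its result.
    return module, "PASSED"

modules = ["hadoop-common", "hadoop-hdfs", "hadoop-yarn", "hadoop-mapreduce"]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_module, modules))
```

The hard production work mentioned above is everything this sketch elides: isolating tests that share ports or temp directories, and merging the per-worker reports back into one verdict.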
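The linting point could be automated along these lines; a minimal sketch of a precommit-style gate (the 80-character limit mirrors Hadoop's checkstyle convention, but everything else here is illustrative, not the real precommit code, which wires in checkstyle and findbugs proper):

```python
# Sketch: flag added lines in a unified diff that exceed a length limit,
# the simplest shape of an automated style gate.
def long_added_lines(diff_text, limit=80):
    """Return (diff_line_number, code) for added lines over `limit` chars."""
    hits = []
    for i, line in enumerate(diff_text.splitlines(), 1):
        # "+" marks an added line; "+++" is the file header, not content.
        if line.startswith("+") and not line.startswith("+++"):
            code = line[1:]
            if len(code) > limit:
                hits.append((i, code))
    return hits

diff = "+++ b/Foo.java\n+short line\n+" + "x" * 100
violations = long_added_lines(diff)
```

Checks like this cost the reviewer nothing once wired into the build, which is the point made above: machine time is cheaper than reviewer time.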
>>I'm very interested in working on this. If the distributed test
>>environment can be prepared, it could accelerate the development of
>>> To date I've been the sole committer running the tests, reviewing the
>>>code and with a vague idea of what's been going on. That's because (a)
>>>I care about object stores after my experience with getting swift://
>>>in, and (b) I'm not recommending that anyone use it in production until
>>>it's been field-tested more.
>>I've heard that the swift community has started to maintain the code.
>>If we make these components production-ready, we need to set up S3 or
>>Swift stubs in the test environment. Is this feasible?
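For the stub question above, the minimal shape might look like this. A hedged sketch only: the class and method names are hypothetical, not an existing Hadoop test fixture, and a usable stub would also need to mimic eventual consistency, multipart uploads, and throttling.

```python
# Sketch: an in-memory object store exposing the minimal put/get/list
# surface that s3a- or swift-style filesystem tests exercise.
class ObjectStoreStub:
    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = bytes(data)

    def get(self, key):
        return self._blobs[key]

    def list(self, prefix=""):
        # Object stores model "directories" as shared key prefixes.
        return sorted(k for k in self._blobs if k.startswith(prefix))

store = ObjectStoreStub()
store.put("logs/2015/02/10.txt", b"ok")
store.put("logs/2015/02/11.txt", b"ok")
```

A stub like this would let Jenkins exercise the filesystem client logic, but as Steve notes below, it is no substitute for runs against the real endpoints.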
>>BTW, an Agile board looks helpful for seeing the status of our
>>projects at a glance. Mesos is using one.
>>- Tsuyoshi
>>On Tue, Feb 10, 2015 at 7:10 PM, Steve Loughran <stevel@hortonworks.com>
>>wrote:
>>> On 9 February 2015 at 21:18:52, Colin P. McCabe
>>>(cmccabe@apache.org<mailto:cmccabe@apache.org>) wrote:
>>> What happened with the Crucible experiment? Did we get a chance to
>>> try that out? That would be a great way to speed up patch reviews,
>>> and one that is well-integrated with JIRA.
>>> I am -1 on Gerrit unless we can find a way to mirror the comments to
>>> JIRA. I think splitting the discussion is a far worse thing than
>>> letting a few minor patches languish for a while (even assuming that
>>> gerrit would solve this, which seems unclear to me). The health of
>>> the community is most important.
>>> I think it is normal and healthy to post on hdfs-dev, email
>>> developers, or hold a meeting to try to promote your patch and/or
>>> idea. Some of the discussion here seems to be assuming that Hadoop is
>>> a machine for turning patch available JIRAs into commits. It's not.
>>> It's a community, and sometimes it is necessary to email people or
>>> talk to them to get them to help with your JIRA.
>>> I know your heart is in the right place, but the JIRA examples given
>>> here are not that persuasive. Both of them are things that we would
>>> not encounter on a real cluster (nobody uses Hadoop with ipv6, nobody
>>> uses Hadoop without setting up DNS).
>>> Got some bad news there. The real world is messy, and the way Hadoop
>>>tends to fail right now leaves java stack traces that lead people to
>>>assume it's a Hadoop-side problem.
>>> Messy networks are especially commonplace amongst people learning to use
>>>Hadoop themselves, future community members, and when you are bringing
>>>up VMs.
>>> In production, well, talk to your colleagues in support and say "how
>>>often do you field network-related problems?", followed by "do you think
>>>Hadoop could do more to help here?"
>>> But, if we find a specific set
>>> of issues that the community has ignored (such as good error messages
>>> in bad networking setups, configuration issues, etc.), then we could
>>> create an umbrella JIRA and make a sustained effort to get it done.
>>> Seems like a good strategy.
>>>  I've just created https://issues.apache.org/jira/browse/HADOOP-11571,
>>>"get S3a production ready". It shipped in Hadoop 2.6; now that it's out
>>>in the wild, the bug reports are starting to come back in. Mostly
>>>scale-related; some failure handling, some improvements to work behind
>>>proxies and with non-AWS endpoints.
>>>   1.  To date all the s3a code has come from non-committers; the
>>>original codebase
>>>   2.  Most of the ongoing dev is from Thomas Demoor at amplidata,
>>>   3.  There's been some support via AWS (HADOOP-10714),
>>>   4.  There have been a couple of patches from Ted Yu after hbase
>>>backups keeled over from too many threads
>>> One thing that is notable about the s3a (or any of the object store
>>>filesystems) is that Jenkins does not run the tests. Anyone proposing to
>>>+1 a patch based on a Jenkins run (see HADOOP-11488) is going to get a
>>>-1 from me; it takes 30-60 minutes for a test run. You get a bill of
>>>about 50c/month for participating in this project.
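For anyone volunteering to help run those tests: Hadoop's convention is that the object-store test suites only run when credentials are supplied via an auth-keys.xml under the hadoop-aws module's test resources. A hypothetical sketch, with placeholder values (the bucket URL and key values are yours to fill in; never commit this file):

```xml
<!-- Sketch of src/test/resources/auth-keys.xml for the hadoop-aws module.
     Placeholder values; keep this file out of version control. -->
<configuration>
  <property>
    <name>test.fs.s3a.name</name>
    <value>s3a://your-test-bucket/</value>
  </property>
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>
```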
>>> To date I've been the sole committer running the tests, reviewing the
>>>code and with a vague idea of what's been going on. That's because (a)
>>>I care about object stores after my experience with getting swift://
>>>in, and (b) I'm not recommending that anyone use it in production until
>>>it's been field-tested more.
>>> Who is going to help me review and test these patches?
>>> Perhaps we could also do things like batching findbugs fixes into
>>> fewer JIRAs, as has been suggested before.
>>> A detail: Findbugs is not the problem.
