asterixdb-dev mailing list archives

From "Till Westmann" <>
Subject Re: [DISCUSS] Improving reviews
Date Tue, 06 Dec 2016 23:35:18 GMT
On 5 Dec 2016, at 23:13, Chris Hillery wrote:

> It's always been my opinion that code reviews are a very nice-to-have,
> but not more than that. The real value in proposing changes for review
> comes from the automated testing that can be performed at that stage.
> I think we'd be better served overall by shoring up and expanding our
> automated testing rather than spending time discussing and implementing
> non-technical process.

While I’m a big fan of automated testing, I think that all automated
testing does is catch functional regressions. There is no automated
way to test for design intent or to suggest improvements. So I don’t
think that automated testing has the same intent, or that it could be
used to replace code reviews.

Also, I think that it’s helpful to have a common expectation of how
members of the community interact. I think that the current way of
interaction around code reviews has just "grown" in a way that doesn’t
work well for everybody. I’ve heard that the latency of addressing
reviews is an issue, and so there was an offline discussion that
I’m bringing to the list here.

Would your suggestion to reduce latency be to stop reviewing code?

> The main benefits of code reviews are catching large-scale design
> errors and spreading code knowledge. You can't really have the former
> until you already have the latter - if only one person really
> understands an area, nobody else will be able to catch design errors
> in that area. That's clearly a risky place to be, but IMHO at least
> it's not a problem that can be solved through rules.

We have certainly improved the design significantly as a follow-up to
some code reviews. Not every review lends itself to that, but each
time it happens, I think that it avoids some development and
maintenance effort.

Also, please note that I'm not suggesting that we establish hard rules,
but that I'm trying to get to an agreement on expectations. The proposal
suggested trying to respond "within 72 h", as suggesting "in a
reasonable timeframe" would be as good as what we have today -
everybody has their own idea of what that means.

> It requires a cultural shift from the team to make spreading code
> knowledge an actual priority, rather than something everyone wants but
> nobody has time or interest to achieve.

I’m not sure what you are suggesting. None of the currently active
committers were among the initial developers of the code base. So every
one of us who has code knowledge acquired it somehow. (And I would guess
that a lot of this happened through the review of code - either by
reading big chunks or by looking at pieces that somebody else put up for
review.)

What would "make spreading code knowledge an actual priority" look like?

> If we as a team don't have the drive to do that, then we should accept 
> that
> about ourselves and move on.

Clearly, code knowledge has spread and continues to spread, so it
doesn't seem to be impossible. However, it does require some investment,
as things are not simple.
But I don't understand what it would mean to "move on". "Moving on"
sounds to me like accepting that everybody on the team understands a
tiny part of the system and ignores the rest.
I believe that that wouldn't work if we want to build a coherent system
(which is my goal ... not sure if we all agree on this). So "moving on"
seems to be "giving up" on my goal, and I'd rather not do that ...


> On Mon, Dec 5, 2016 at 10:49 PM, Till Westmann <> wrote:
>> Hi,
>> today a few of us had a discussion about how we could make the
>> reviewing process move along a little more smoothly. The goal is to
>> increase the likelihood that reviews and review comments get
>> addressed reasonably quickly. To do that, the proposal is to
>> a) try to keep ourselves to some time limit up to which a reviewer
>>    or author responds to a review or a comment and to
>> b) regularly report via e-mail about open reviews and how long they
>>    have been open (Ian has already filed an issue to automate this
>>    [1]).
>> Of course one is not always able to spend the time to do a thorough
>> review [2] / respond fully to comments, but in this case we should
>> aim to let the other participants know within the time limit that
>> the task is not feasible so that they can adapt their plans
>> accordingly.
>> The first proposal for the time limit would be 72 h (which is taken
>> from the minimal time that a [VOTE] stays open to allow people in
>> all different locations and timezones to vote).
>> Another goal would be to abandon reviews if nobody seems to be
>> working on them for a while (and we’d need to find out what "a
>> while" could be).
>> Thoughts on this?
>> A good idea or too much process?
>> Is the time limit reasonable?
>> Please let us know what you think (ideally more than a +1 or a -1
>> ...)
>> Cheers,
>> Till
>> [1]
>> [2]
