asterixdb-dev mailing list archives

From "Till Westmann" <ti...@apache.org>
Subject Re: [DISCUSS] Improving reviews
Date Tue, 06 Dec 2016 23:41:01 GMT
On 6 Dec 2016, at 10:42, abdullah alamoudi wrote:

> I would like to stress one point though: having no comments after the 
> timeout (72h) means that the reviewer couldn't find time to do the 
> review, in which case the change owner needs to be extra careful, as 
> the blame will fall entirely on them if their change breaks something. 
> If you're not totally sure about your change and aren't testing every 
> possible corner case, you can still insist on a review.
>
> One issue that comes to mind is: what if someone's changes 
> continuously break things?
> Of course we can't revoke commit privileges (can we?), but what we can 
> do is:
> 1. pay more attention to changes submitted by people whose changes 
> break things more often.
> 2. increase the timeout period for them "temporarily": 72->96->120... 
> If you don't like this, don't make bad changes.
> 3. MOST IMPORTANTLY, all of us should strive for better test coverage 
> to automatically detect breakage caused by wild changes.

Wow, that went quickly from suggesting expectations for a review 
timeframe to revoking commit privileges …
Doesn’t seem to be the "Apache Way" …

>> On Dec 6, 2016, at 10:33 AM, abdullah alamoudi <bamousaa@gmail.com> 
>> wrote:
>>
>> Ceej,
>> You spoke my mind and I agree with every word. I believe the way to 
>> go is a smoother code review process (maybe through a time limit) and 
>> more focus on automated testing. One thing we could do is:
>> If the 72-hour period passes with no comments, we look at the code 
>> coverage and how much it improved and, based on that, either allow 
>> the change in or deny it. This would push change submitters to add 
>> tests to increase coverage even before the 72h limit passes. It 
>> doesn't remove the responsibility of doing the review; the review is 
>> still to be done. However, as Ceej said, one of the goals of doing 
>> reviews is to catch large-scale design errors. Those, I think, still 
>> need humans to look at, but they can be caught fairly quickly with 
>> minimal effort.
>>
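
A minimal sketch of what such a coverage gate could look like, assuming 
both the baseline build and the patched build produce JaCoCo XML reports; 
the report paths and the tolerance below are made-up placeholders, not 
existing AsterixDB tooling:

    #!/usr/bin/env python3
    # Coverage gate sketch: compare overall line coverage of the baseline
    # build against the patched build (both assumed to be JaCoCo XML
    # reports) and fail if coverage dropped.
    import sys
    import xml.etree.ElementTree as ET

    def line_coverage(jacoco_xml):
        """Return the covered/total line ratio from a JaCoCo XML report."""
        root = ET.parse(jacoco_xml).getroot()
        # The report-level aggregate counters are direct children of <report>.
        line = [c for c in root.findall("counter") if c.get("type") == "LINE"][0]
        covered = int(line.get("covered"))
        missed = int(line.get("missed"))
        return covered / float(covered + missed)

    if __name__ == "__main__":
        before = line_coverage("baseline/jacoco.xml")  # report from master
        after = line_coverage("patched/jacoco.xml")    # report with the change
        print("line coverage: %.2f%% -> %.2f%%" % (before * 100, after * 100))
        if after + 0.001 < before:                     # allow a little noise
            sys.exit("coverage dropped -- change should not go in unreviewed")
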
>> As for spreading the knowledge, I'll leave that to a different 
>> discussion. I will end this with some tweets about code simplicity 
>> and change, taken from Max Kanat-Alexander, the author of Code 
>> Simplicity:
>>
>> 1. You don't have to be perfect. If you make a bad change, just fix 
>> it. (mistakes will happen with or without reviews)
>> 2. If somebody is improving code quality, don't shoot them down 
>> because their improvement isn't perfect. (to reviewers)
>> 3. The point is to have a maintainable system, not to show how clever 
>> you are. (to submitters)
>> 4. Code quality isn't something you fix once and it stays good 
>> forever. It's something you learn to do and continue doing.
>> 5. Engineers don't beg, "Please let me build a bridge that will stay 
>> up." You shouldn't need permission to write good software.
>> 6. Anybody who tells you that you can fix years of bad code in months 
>> or days is a liar.
>> 7. Even huge codebases can be refactored incrementally.
>> 8. Sometimes a refactoring will break something. Often, this proves 
>> that the code was too fragile and so needed the refactoring!
>> 9. If your code "works," but it's an unstable pile of complexity, do 
>> you feel good about it?
>> 10. Refactoring is often easier and more rewarding than you expect.
>> 11. Don't try to write "perfect" code, just write *better* code until 
>> you have *good* code.
>> 12. Don't worry about how to do the perfect refactoring. Just keep 
>> improving the code in small steps.
>>
>> I am glad we're talking about this. Cheers,
>> ~Abdullah.
>>
>>> On Dec 5, 2016, at 11:13 PM, Chris Hillery <chillery@hillery.land> 
>>> wrote:
>>>
>>> It's always been my opinion that code reviews are a very 
>>> nice-to-have, but
>>> not more than that. The real value in proposing changes for review 
>>> comes
>>> from the automated testing that can be performed at that stage. I 
>>> think
>>> we'd be better served overall by shoring up and expanding our 
>>> automated
>>> testing rather than spending time discussing and implementing 
>>> non-technical
>>> process.
>>>
>>> The main benefits of code reviews are catching large-scale design 
>>> errors
>>> and spreading code knowledge. You can't really have the former until 
>>> you
>>> already have the latter - if only one person really understands an 
>>> area,
>>> nobody else will be able to catch design errors in that area. That's
>>> clearly a risky place to be, but IMHO at least it's not a problem 
>>> that can
>>> be solved through rules. It requires a cultural shift from the team 
>>> to make
>>> spreading code knowledge an actual priority, rather than something 
>>> everyone
>>> wants but nobody has time or interest to achieve.
>>>
>>> If we as a team don't have the drive to do that, then we should 
>>> accept that
>>> about ourselves and move on. You'll always do best spending time on
>>> enhancing the strengths of a team, not fighting against the things 
>>> they
>>> don't excel at. I'm also not trying to make any kind of value 
>>> judgment here
>>> - software development is always about tradeoffs and compromise, 
>>> risk
>>> versus goals. Any time taken to shift focus towards spreading code
>>> knowledge will by necessity pull from other parts of the development
>>> process, and the upshot may well not be an overall improvement in
>>> functionality or quality.
>>>
>>> Ceej
>>> aka Chris Hillery
>>>
>>> On Mon, Dec 5, 2016 at 10:49 PM, Till Westmann <tillw@apache.org> 
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> today a few of us had a discussion about how we could make the 
>>>> reviewing process move along a little more smoothly. The goal is to 
>>>> increase the likelihood that reviews and review comments get 
>>>> addressed reasonably quickly. To do that, the proposal is to
>>>> a) try to keep ourselves to some time limit within which a reviewer 
>>>>  or author responds to a review or a comment, and to
>>>> b) regularly report via e-mail about open reviews and how long they 
>>>>  have been open (Ian has already filed an issue to automate this [1]).
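
A minimal sketch of the query that such a report could be built on, 
assuming the standard Gerrit REST "changes" endpoint; the Gerrit URL is an 
assumption here, and actually mailing the result is left out:

    #!/usr/bin/env python3
    # Open-reviews report sketch (the kind of thing the automation issue
    # asks for): list open Gerrit changes that have been idle longer than
    # the proposed 72h limit and print them to stdout.
    import json
    import urllib.request
    from datetime import datetime, timedelta

    GERRIT = "https://asterix-gerrit.ics.uci.edu"  # assumed Gerrit instance
    LIMIT = timedelta(hours=72)

    def open_changes():
        with urllib.request.urlopen(GERRIT + "/changes/?q=status:open") as resp:
            body = resp.read().decode("utf-8")
        # Gerrit prepends the ")]}'" XSSI guard before the JSON payload.
        return json.loads(body[body.index("\n"):])

    if __name__ == "__main__":
        now = datetime.utcnow()
        for change in open_changes():
            # Gerrit timestamps look like "2016-12-06 18:30:00.000000000" (UTC).
            updated = datetime.strptime(change["updated"][:19], "%Y-%m-%d %H:%M:%S")
            idle = now - updated
            if idle > LIMIT:
                print("change %d idle for %dh: %s"
                      % (change["_number"], idle.total_seconds() // 3600,
                         change["subject"]))
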
>>>> Of course one is not always able to spend the time to do a thorough 
>>>> review [2] or to respond fully to comments, but in that case we 
>>>> should aim to let the other participants know within the time limit 
>>>> that the task is not feasible, so that they can adapt their plans 
>>>> accordingly.
>>>> The first proposal for the time limit would be 72h (taken from the 
>>>> minimum time that a [VOTE] stays open, to allow people in all 
>>>> different locations and timezones to vote).
>>>> Another goal would be to abandon reviews if nobody seems to be 
>>>> working on them for a while (and we’d need to find out what "a 
>>>> while" could be).
>>>>
>>>> Thoughts on this?
>>>> A good idea or too much process?
>>>> Is the time limit reasonable?
>>>> Please let us know what you think (ideally more than a +1 or a -1 
>>>> ...)
>>>>
>>>> Cheers,
>>>> Till
>>>>
>>>> [1] https://issues.apache.org/jira/browse/ASTERIXDB-1745
>>>> [2] 
>>>> https://cwiki.apache.org/confluence/display/ASTERIXDB/Code+Reviews
>>>>
>>
