hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Lucene-hadoop Wiki] Update of "CodeReviewChecklist" by NigelDaley
Date Fri, 13 Apr 2007 21:01:00 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucene-hadoop Wiki" for change notification.

The following page has been changed by NigelDaley:

New page:
= Code Review Checklist =

Here is a list of things to check during code reviews.

== Coding Style ==

'''new code:'''
 * follows [http://java.sun.com/docs/codeconv/ Sun's code conventions], except indentation is 2 spaces, not 4
'''changes to existing code:'''
 * maintains existing style

== Documentation ==

'''internal javadoc:'''
 * accurate, sufficient for future maintainability
'''public javadoc:'''
 * accurate, sufficient for developers to code against
 * follows standard javadoc conventions
 * loggers and logging levels covered if they do not follow our conventions (see below)
 * system properties, configuration options, and resources covered
 * illegal arguments, and the exceptions they trigger, are documented as appropriate
 * package and overview javadoc are updated as appropriate
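To illustrate the public-javadoc points above, a sketch of a method documented to this standard follows; the class, method, and parameter names are invented for the example:

```java
/**
 * Sketch of public javadoc per the checklist above: the argument's
 * constraints and the exception it can trigger are documented.
 * All names here are hypothetical.
 */
public class BlockSizeValidator {

  /**
   * Validates a block size read from configuration.
   *
   * @param blockSize the block size in bytes; must be positive
   * @return the validated block size
   * @throws IllegalArgumentException if {@code blockSize} is not positive
   */
  public static long validate(long blockSize) {
    if (blockSize <= 0) {
      throw new IllegalArgumentException(
          "blockSize must be positive: " + blockSize);
    }
    return blockSize;
  }
}
```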

== Coding ==

 * implementation matches what the documentation says
 * logger name is effectively the result of {{{Class.getName()}}}
 * class & member access - as restricted as it can be (subject to testing requirements)
 * appropriate {{{NullPointerException}}} and {{{IllegalArgumentException}}} argument checks
 * asserts - verify they should always be true
 * look for accidental propagation of exceptions
 * look for unanticipated runtime exceptions
 * try-finally used as necessary to restore consistent state
 * logging levels conform to [http://logging.apache.org/log4j/docs/api/org/apache/log4j/Level.html Log4j levels]
 * possible deadlocks - look for inconsistent locking order
 * race conditions - look for missing or inadequate synchronization
 * consistent synchronization - always locking the same object(s)
 * look for proper synchronization, or for documentation stating that the class is not synchronized
 * look for possible performance problems
 * look at boundary conditions for problems
 * configuration entries are retrieved/set via setter/getter methods
 * implementation details do NOT leak into interfaces
 * variables and arguments should be interfaces where possible
 * if {{{equals}}} is overridden then {{{hashCode}}} is overridden (and vice versa)
 * objects are checked ({{{instanceof}}}) for appropriate type before casting (use generics if possible)
 * public API changes have been publicly discussed
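Several of the items above (argument checks, the {{{instanceof}}}-before-cast rule, the {{{equals}}}/{{{hashCode}}} pairing, and logger naming) can be sketched in one small class; {{{HostPort}}} is an invented example, and the logger name is shown as a plain string since no logging library is assumed here:

```java
import java.util.Objects;

/** Sketch of several Coding checklist items; HostPort is an invented class. */
public final class HostPort {
  // Logger name is effectively the result of Class.getName(); shown as a
  // string because this sketch does not assume a particular logging API.
  static final String LOGGER_NAME = HostPort.class.getName();

  private final String host;
  private final int port;

  public HostPort(String host, int port) {
    if (host == null) {
      throw new NullPointerException("host");              // null check
    }
    if (port < 0 || port > 65535) {
      throw new IllegalArgumentException("port: " + port); // argument check
    }
    this.host = host;
    this.port = port;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof HostPort)) {   // type check before casting
      return false;
    }
    HostPort other = (HostPort) o;
    return port == other.port && host.equals(other.host);
  }

  // equals is overridden, so hashCode must be overridden too (and vice versa)
  @Override
  public int hashCode() {
    return Objects.hash(host, port);
  }
}
```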

== Tests ==

 * unit tests exist for bug fixes and new features, or a rationale is given in Jira for why there is no test
 * unit tests do not write any temporary files to {{{/tmp}}} (instead, the tests should write to the location specified by the {{{test.build.data}}} system property)
 * {{{org.apache.hadoop.dfs.MiniDFSCluster}}} and {{{org.apache.hadoop.mapred.MiniMRCluster}}} are used to start servers as needed (servers are not directly instantiated)
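A minimal sketch of the {{{test.build.data}}} convention above: a test resolves its scratch directory from the system property instead of hard-coding {{{/tmp}}}. The helper name and the fallback path are assumptions for the example, not Hadoop API:

```java
import java.io.File;

/**
 * Sketch: tests resolve their scratch directory from the test.build.data
 * system property rather than writing to /tmp. The class name and the
 * fallback path are hypothetical.
 */
public class TestDirs {
  public static File testDataDir(String child) {
    // Fall back to a build-local directory, never /tmp, if the
    // property is unset (fallback path is an assumption).
    String base = System.getProperty("test.build.data", "build/test/data");
    return new File(base, child);
  }
}
```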
