Checklists – You build me up just to knock me down


list and use automated tools where helpful.

An item that appears on almost every checklist is: "Does the code accomplish what it is meant to do?"  Is this question really necessary?  It's a code review - what else would you expect the reviewer to do?  Another common one is "Can you understand the code as written?" or, similarly, "Is the code documented properly?"  Again, it's not possible to review code at all without understanding it.

Don't waste precious checklist items on directives that the reviewer will do anyway.  The second rule is: No obvious items.

In my experience, the majority of items on long checklists pertain to simple programming rules or coding style.  Style items include naming conventions, whitespace and block positions, and where comments are located.  Simple programming rules identify basic
programming errors and are often specific to a language.  A Java rule might be "Always override hashCode() if you override equals()," whereas a C# rule could be "Classes
marked with the Serializable attribute must have a default constructor."
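The same kind of mechanical rule exists in most languages.  As an illustration (not from the article - the class names here are invented), here is the Python analogue of the Java hashCode()/equals() rule: defining __eq__ without __hash__ silently makes instances unhashable.

```python
# Illustrative sketch: Python's analogue of "always override hashCode()
# if you override equals()".  Defining __eq__ without __hash__ sets
# __hash__ to None, so instances can no longer live in sets or dict keys.

class Point:
    """Defines __eq__ but not __hash__ -- the classic mistake."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)
    # Python implicitly sets __hash__ = None here.

class HashablePoint(Point):
    """Fixes the rule violation: equal points hash equally."""
    def __hash__(self):
        return hash((self.x, self.y))

try:
    {Point(1, 2)}                      # raises TypeError: unhashable type
except TypeError as e:
    print("broken:", e)

print(HashablePoint(1, 2) in {HashablePoint(1, 2)})  # True
```

A reviewer should not have to spot this by eye; it is exactly the sort of rule a tool can flag every time.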
 
These items can be found by static analysis tools.  At least for mainstream languages, there are dozens of tools, both free and commercial, that check for style and basic programming errors.  Don't use the excuse that some cost money to buy and all cost time to set up; that cannot possibly be more expensive than having a human dig up these mundane errors during a code review.  Developers need to be doing things that only a human can do, like checking code correctness and maintainability.
 
The third rule is: No items that can be automated.
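To make the point concrete, here is a toy checker - a sketch, not a real linter - built on Python's standard ast module.  It flags the rule from the previous section (a class defining __eq__ without __hash__), showing how cheaply such mechanical checks can be automated:

```python
import ast

# Toy static-analysis check (illustrative, not a production tool):
# flag any class that defines __eq__ but not __hash__.  Rules like this
# belong in a linter, not on a human reviewer's checklist.

def find_eq_without_hash(source: str) -> list:
    """Return the names of classes defining __eq__ but not __hash__."""
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            methods = {n.name for n in node.body
                       if isinstance(n, ast.FunctionDef)}
            if "__eq__" in methods and "__hash__" not in methods:
                offenders.append(node.name)
    return offenders

code = """
class Broken:
    def __eq__(self, other): ...

class Fine:
    def __eq__(self, other): ...
    def __hash__(self): ...
"""
print(find_eq_without_hash(code))  # ['Broken']
```

Real tools cover hundreds of such rules out of the box; the checklist should be reserved for what they cannot see.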

So which items should stay on the checklist?

Before we had a proper build system, we used to manually update the build number in a header file.  Don't laugh - we've all been there!  Of course we'd always forget to bump the number and, even with code review, the reviewers forgot to check for that too.  After all, if that header file wasn't included in the review, there was nothing to jog the reviewer's memory.  It's one thing to verify something that's put in front of you; it's a lot harder to review that which is missing.

If it's easy for the author to forget, the reviewer will forget too.  Only a checklist can help in this situation.  So the fourth rule is: Items should be things that you commonly forget.

And now, an example of a Perfect Checklist Item: "Recovers properly from errors everywhere in the method."  Incorrect error handling often turns what should be error dialogs into fatal crashes or memory leaks.  In my experience, few reviewers remember to look at error paths, and authors forget too, especially when a change is made in a large method.  You generally can't automate the check for correct unwinding, especially when the method changes shared state like global variables, registers for events, or accesses external resources.
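As a minimal sketch of what that checklist item catches (the names here are hypothetical, not from the article): a function registers a job in shared state, and the question a reviewer must ask is whether that state is unwound when the work fails partway through.

```python
# Sketch of error-path unwinding in a method that mutates shared state.
# The reviewer's checklist question: if process() raises halfway through,
# is every earlier side effect undone?

active_jobs = set()   # hypothetical shared state

def run_job(job_id, payload, process):
    if job_id in active_jobs:
        raise RuntimeError("job %r already running" % job_id)
    active_jobs.add(job_id)          # side effect: register the job
    try:
        return process(payload)      # may raise at any point
    finally:
        active_jobs.discard(job_id)  # unwind on success AND on the error path

# Without the try/finally, a failing process() would leave job_id stuck in
# active_jobs, and every retry would fail with "already running" -- exactly
# the kind of leak a reviewer finds only by tracing the error paths.
```

No tool can verify in general that this unwinding is semantically correct, which is why it earns a spot on the checklist.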

Building the List: The Week of Pain

The question I get asked most about checklists is: Do you have a sample checklist we can use?  As a politician would say, this is the wrong question.  The right question is: How do we build a checklist appropriate for our software development, and how do we adjust the list over time?

Let's make this personal.  I've seen the following technique applied successfully on several occasions.  It's a take-off of the "personal checklist" concept from the Personal Software Process from the Software Engineering Institute.

Try this sometime: For one week, every time you make a mistake, no matter how small, write it down.  Everything.  Misspell a word while writing an email?  Write it down.  Close an application when you meant to close just one window?  Write it down.  Have a lingering compiler error when you run the unit tests?  Write it down.

About the author

Jason Cohen

Jason Cohen is the author of Best Kept Secrets of Peer Code Review, the definitive work on the new process of lightweight peer code review, with 7,000 copies in circulation. Jason conducted and published the largest case study of peer code review at Cisco Systems®—2,500 reviews over 10 months with a real, delivered product. Jason is also the founder of Smart Bear Inc., makers of Code Collaborator, which assists developers and managers with lightweight peer code review processes. Jason has spoken at many conferences, trade shows, and ACM/IEEE society meetings across the country.

