Seven Deadly Sins of Software Reviews

Member Submitted

find defects, their review contributions are minimal. Participants who are there only to learn may benefit, but they aren't likely to improve the quality of the product. Management participation in reviews can also (though it doesn't always) lead to poor review results. If the team feels the manager is counting the bugs found to hold against the author at performance appraisal time, they may hesitate to raise issues during the discussion that might make their colleague look bad.

Large review teams can also be counterproductive. I once participated in a review (ironically, of a draft peer review process) that involved 14 reviewers. A committee of 14 can't agree to leave a burning room, let alone agree on what's an error and how a sentence should be phrased. Large review teams can generate multiple side conversations that do not contribute to the review objectives and slow the pace of progress.

Solutions: Review teams having 3 to 7 participants are most effective. The reviewers should include the work product's author, the author of any predecessor or specification document, and anyone who will be a victim of the product. For example, a design review should include the designer, the author of the requirements specification, the programmer, and whoever is responsible for integration testing. On small projects, one person may play all these roles, so ask some of your peers to represent the other perspectives. It's okay to include some participants who are there primarily to learn (an important side benefit of software reviews), but focus on people who will spot bugs.

I'm not dogmatic on the issue of management participation. As a group leader, I also wrote software, so I needed to have it reviewed (thereby setting an example for the rest of the team), and I was able to contribute usefully to reviews of other team members' products. This is very much a cultural issue, dependent on the mutual respect and attitudes of the team members. A good rule of thumb is that only the author's first-line manager may attend a review, and only with the author's consent. Managers can never join in the review "just to see what's going on."

Reviewers Focus on Style, Not Substance
Symptoms: Whenever I see a defect list containing mostly style issues, I'm nervous that substantive errors have been overlooked. When review meetings turn into debates on style and the participants get heated up over indentation, brace positioning, variable scoping, and commenting, they aren't spending energy on finding logic errors and missing functionality.

Solutions: Style can be a defect if excessive complexity, obscure variable names, and coding tricks make the code hard to understand and maintain. This is obviously a value judgment: an expert programmer can understand complex and terse programs more readily than someone who has less experience. Control the style distraction by adopting standard templates for project documents and coding standards or guidelines. These will help make the evaluation of style conformance more objective. Use code reformatting tools to enforce the standards, so people can program the way they like, then convert the result to the established group conventions.
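To illustrate how style conformance can be checked mechanically rather than debated in the meeting, here is a minimal sketch of an automated style check. The two rules (a maximum line length and a ban on single-letter variable names) are hypothetical stand-ins for whatever conventions your team adopts:

```python
import re

MAX_LINE_LENGTH = 79  # assumed team convention


def style_findings(source: str) -> list[str]:
    """Return a list of style violations for the given source text."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        # Rule 1: flag lines that exceed the agreed maximum length.
        if len(line) > MAX_LINE_LENGTH:
            findings.append(f"line {lineno}: exceeds {MAX_LINE_LENGTH} characters")
        # Rule 2: flag single-letter names on the left side of an assignment.
        match = re.match(r"\s*([a-zA-Z])\s*=[^=]", line)
        if match:
            findings.append(f"line {lineno}: single-letter name '{match.group(1)}'")
    return findings


sample = "x = compute_total(order)\nresult = x * 2\n"
for finding in style_findings(sample):
    print(finding)  # prints: line 1: single-letter name 'x'
```

In practice, mature reformatters and linters (Black for Python, clang-format for C and C++) enforce far richer rule sets than this sketch; the point is simply that once the rules live in a tool, the review meeting can move on to logic errors and missing functionality.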

Be flexible and realistic about style. It's fine to share your ideas and experiences on programming style during a review, but don't get hung up on minor issues and don't impose your will on others. Programming styles haven't come to Earth from a fount of Universal Truth, so respect individual differences when it is a matter of preference rather than clarity.

Any software engineering practice can be performed badly, thereby failing to yield the desired benefits. Technical reviews are no exception.

AgileConnection is a TechWell community.
