Design and code reviews promise to improve software quality, ensure compliance with standards, and serve as a valuable teaching tool for developers. As with most practices, subtle nuances in how they're performed can dramatically affect their value. In some organizations, reviews are a valuable aspect of the software lifecycle. In others, they are a necessary evil tainted by political bureaucracy and big egos. Reviews conducted late in the lifecycle often go astray because few objective guidelines exist to direct the process. Used throughout the development lifecycle, code and design quality metrics are valuable inputs to the review process.
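To make the idea of metrics as objective review inputs concrete, here is a minimal sketch (not any particular tool's API) that counts branch points per function as a rough complexity proxy, using Python's standard `ast` module. The function and node list are hypothetical names chosen for illustration.

```python
# A minimal sketch of an objective review input: count branch points
# per top-level function as a rough cyclomatic-complexity proxy.
import ast

# Node types treated as branch points (an illustrative choice, not a standard).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def branch_count(source):
    """Return {function_name: number of branch points} for a source string."""
    tree = ast.parse(source)
    results = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            results[node.name] = sum(
                isinstance(n, BRANCH_NODES) for n in ast.walk(node)
            )
    return results

sample = """
def simple(x):
    return x + 1

def tangled(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                x += i
    return x
"""
print(branch_count(sample))  # {'simple': 0, 'tangled': 3}
```

A reviewer armed with even a crude number like this can focus discussion on `tangled` rather than debating brace placement.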
Reviews Increase Agility
Continuous Integration. Test-Driven Development. Refactoring. Pair Programming. Agile practices are abundant, and many teams interested in increasing their agility have devoted valuable energy and resources to improving these practices. In the process, many teams have abandoned reviews while emphasizing other aspects of agility. But reviews are an important tool in the agile toolkit.
A driving principle of the Agile Manifesto is continuous attention to technical excellence. Another is embracing and harnessing change as an opportunity to increase customer advantage. For developers, change often begins and ends with modifications to the source code. A poorly designed application with smelly code is a breeding ground for risk that makes change incredibly difficult, and is the greatest technical inhibitor to increased agility. Effective reviews that emphasize design quality and code cleanliness are an important aspect of increased agility. Reviews done right help ensure continuous attention to technical excellence. Unfortunately, not all reviews are done right.
Review Worst Practices
Some development teams find reviews a healthy and valuable asset to developers and the project team. Others realize little value from their review process. Painful and ineffective reviews have numerous causes; common symptoms include:
- Witch hunt reviews - Many reviews degrade quickly into attack and defend mode. This often occurs because the developer who wrote the code feels attacked and threatened when reviewers make direct and opinionated statements about the code. Nothing could be less productive.
- Curly brace reviews - Some reviews emphasize formatting and comments instead of more serious problems. Are curly brace placement and misspelled comments really that important? Curly brace reviews are a feeding ground for the anal retentive and provide no real value.
- Blind reviews - Too often, reviewers walk into the review meeting having never laid eyes on the code they are about to review, and most of the review time is spent figuring out what the code does. Spending the meeting trying to understand the code instead of examining it for more serious ailments is a waste of time.
- Exclusionary reviews - Many times, the code provided for the review is only a sampling of the code written. For example, unit tests might be excluded from the review. In an unhealthy review environment, providing partial and incomplete code listings will leave the reviewers wondering how the code actually works.
- Tree killer reviews - If you can't baffle reviewers by providing half of what they need to understand the code, then maybe overwhelming them with thousands of lines of code will work. Waiting until the codebase is incredibly large to host the first review is entirely ineffective. Not only is it too difficult to provide effective feedback on a large codebase, but these reviews are often held so late in the lifecycle that the developer cannot improve her code based on the feedback received.
- Token reviews - It's not uncommon for management to dictate that reviews be held. In fact, that's the reason I held my first review. Fortunately, I decided that if I was required to host them, I was going to try to make them effective. Token reviews are typically held for political reasons: management wants to ensure that all code is reviewed for auditing purposes. Unfortunately, developers realize very little value from these reviews. Any problems found are not fixed unless they are absolutely critical. Since the primary motivation is an audit trail for management, the team has little motivation to improve the code.
- World reviews - I've been to some reviews that probably should have been held in an auditorium given the number of people in attendance. This can be incredibly intimidating for the developers whose code is being reviewed, and it's unclear what value so many attendees provide. A few reviewers, up to five, can serve all the needs of the review process. If more people want to provide input, there are better ways.
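To illustrate the distinction the list above draws between cosmetic nits and more serious ailments, consider a hypothetical snippet. The function names are invented for illustration; the defect is Python's classic mutable-default-argument trap, exactly the kind of problem a curly brace review would sail past.

```python
# Hypothetical example: the kind of defect a substantive review should
# catch, versus the cosmetic nits a "curly brace review" fixates on.

def add_tag_buggy(tag, tags=[]):
    # Subtle bug: the default list is created once and shared across
    # calls, so tags silently accumulate between unrelated invocations.
    tags.append(tag)
    return tags

def add_tag(tag, tags=None):
    # Reviewed fix: use None as the sentinel and build a fresh list per
    # call, so each invocation starts clean.
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

print(add_tag_buggy("a"))  # ['a']
print(add_tag_buggy("b"))  # ['a', 'b']  <- surprise: state leaked
print(add_tag("a"))        # ['a']
print(add_tag("b"))        # ['b']
```

Whether `tags=[]` had spaces around the equals sign was never the problem; a review that debates formatting while missing the shared default list has delivered negative value.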
A few simple tactical adjustments can alleviate many of these traditional review worst practices. Let's explore some ways to improve the value of the review process.