The Pros and Cons of Four Kinds of Code Reviews


the greatest number of defects. We've never seen anyone do this in practice.

The single biggest complaint about pair-programming is that it costs too much time: rather than having a reviewer spend 15-30 minutes reviewing a change that took one developer a few days to make, pair-programming puts two developers on the task for its entire duration.

Of course pair-programming has other benefits, but a full discussion is beyond the scope of this article and is already covered extensively elsewhere. To summarize:

    • Pro: Shown to be effective at finding bugs and promoting knowledge-transfer
    • Pro: Reviewer is "up close" to the code so can provide detailed review
    • Pro: Some developers like it
    • Con: Some developers don't like it
    • Con: Reviewer is "too close" to the code to step back and see problems
    • Con: Consumes a lot of up-front time
    • Con: Doesn't work with remote developers
    • Con: No metrics or process measurement/improvement

Tool-assisted review
This refers to any process in which specialized tools are used in all aspects of the review: collecting files, transmitting and displaying files, commentary, and defects among all participants, collecting metrics, and giving product managers and administrators some control over the workflow. "Tool-assisted" means you either bought a tool (like ours) or you wrote your own. Either way, this means money: you're either paying a vendor for the tool or paying your own people to create and maintain it. You also have to make sure the tool matches your desired workflow, and not the other way around. Therefore, the tool had better provide many advantages if it is to be worthwhile. Specifically, it needs to fix the major problems of the foregoing types of review with:

  • Automated File-Gathering: As we discussed in email pass-around, developers shouldn't be wasting their time collecting "files I've changed" and all the differences. Ideally, the tool should be able to collect changes either before they are checked into version control or after (see the file-gathering sketch after this list).
  • Combined Display of Differences, Comments, and Defects: One of the biggest time-sinks in any type of review is reviewers and developers having to associate each sub-conversation with a particular file and line number. The tool must display files and before/after file differences in such a manner that conversations are threaded and no one has to spend time cross-referencing comments, defects, and source code.
  • Automated Metrics Collection: On one hand, accurate metrics are the only way to understand your process and to measure what changes when you change the process. On the other hand, no developer wants to review code while holding a stopwatch and wielding line-counting tools. A tool that automates the collection of key metrics is the only way to keep developers happy (i.e., no extra work for them) and still get meaningful numbers on your process. A full discussion of review metrics and what they do (and don't) mean will appear in another article, but your tool should at least collect these three: kLOC/hour (inspection rate), defects/hour (defect rate), and defects/kLOC (defect density). A worked example of all three appears after this list.
  • Workflow Enforcement: Almost all other types of review suffer from the problem of product managers not knowing whether developers are reviewing all code changes, or whether reviewers are verifying that defects are indeed fixed without introducing new ones. A tool should be able to enforce this workflow at least at the reporting level (passive enforcement) and at best at the version control level, with server-side triggers (see the trigger sketch after this list).
  • Clients and Integrations: Some developers like command-line tools. Others need integrations with IDEs and version control GUI clients. Administrators like zero-installation web clients and Web Services APIs. It's important that a tool supports many ways to read and write data in the system.
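
To make the file-gathering point concrete, here is a minimal sketch written in Python against a Git working copy, purely for illustration; the script and repository layout are assumptions, not a description of any particular product. The idea is simply that the tool, not the developer, asks version control for the changed files and their differences:

```python
import subprocess

def gather_pending_changes():
    """Collect uncommitted changes from a Git working copy.

    A review tool would run something like this automatically, so the
    developer never has to assemble "files I've changed" by hand.
    """
    # Names of files modified relative to HEAD (not yet committed).
    files = subprocess.run(
        ["git", "diff", "--name-only", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    # Unified diff of those changes, ready to upload for review.
    diff = subprocess.run(
        ["git", "diff", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

    return files, diff

if __name__ == "__main__":
    changed, patch = gather_pending_changes()
    print(f"{len(changed)} file(s) pending review")
```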
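
The three metrics named above are simple ratios over data the tool already has: lines inspected, person-hours spent, and defects logged. A minimal worked example (the class and field names are hypothetical, not any product's API):

```python
from dataclasses import dataclass

@dataclass
class ReviewMetrics:
    lines_reviewed: int   # total lines inspected
    person_hours: float   # total reviewer time spent
    defects_found: int    # defects logged during the review

    @property
    def inspection_rate(self) -> float:
        """kLOC/hour: how fast the code was read."""
        return (self.lines_reviewed / 1000) / self.person_hours

    @property
    def defect_rate(self) -> float:
        """Defects/hour: how productive the review was."""
        return self.defects_found / self.person_hours

    @property
    def defect_density(self) -> float:
        """Defects/kLOC: how buggy the code under review was."""
        return self.defects_found / (self.lines_reviewed / 1000)

# Example: a 400-line change reviewed for 1.5 hours, 6 defects found.
m = ReviewMetrics(lines_reviewed=400, person_hours=1.5, defects_found=6)
print(f"{m.inspection_rate:.2f} kLOC/hr, "
      f"{m.defect_rate:.1f} defects/hr, "
      f"{m.defect_density:.1f} defects/kLOC")
```

This prints 0.27 kLOC/hr, 4.0 defects/hr, and 15.0 defects/kLOC, all collected as a by-product of using the tool, with no stopwatch in sight.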
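
Enforcement "at the version control level" usually means a server-side trigger that rejects check-ins lacking evidence of a completed review. Here is a minimal sketch, assuming a convention of a "Review: <id>" line in the commit message; both the convention and the check are illustrative, not a specific product's mechanism:

```python
import re
import sys

REVIEW_TAG = re.compile(r"^Review:\s*\d+", re.MULTILINE)

def commit_is_reviewed(message: str) -> bool:
    """Accept a commit only if its message references a completed review."""
    return bool(REVIEW_TAG.search(message))

if __name__ == "__main__":
    # A real VCS hook would hand us the pending commit message;
    # here we read it from stdin for illustration.
    message = sys.stdin.read()
    if not commit_is_reviewed(message):
        sys.exit("Rejected: commit must reference a completed review "
                 "(add a 'Review: <id>' line to the message).")
```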

If your tool satisfies this list, you get the benefits of code review without the drawbacks of the foregoing types.

About the author


Jason Cohen is the author of Best Kept Secrets of Peer Code Review, the definitive work on the new process of lightweight peer review, with 7,000 copies in circulation. Jason conducted and published the largest case study of peer code review at Cisco Systems®: 2,500 reviews over 10 months on a real, delivered product. Jason is also the founder of Smart Bear Inc., makers of Code Collaborator, a tool that assists developers and managers with lightweight peer code review processes. Jason has spoken at many conferences, trade shows, and ACM/IEEE society meetings across the country.
