How does that compare to QA? Is this really worth it?
Over the last four years we've arrived at the answer: We can review code in an Agile, lightweight way using a development tool that integrates source code viewing with chat room collaboration. The tool must gather metrics automatically, because although metrics are critical to process improvement, developers eschew collecting them by hand. The tool must do just enough to support developers, but not be so inflexible or all-encompassing that it starts dictating the process.
This tool — Code Collaborator — is the culmination of everything we've learned about lightweight peer code review at Smart Bear. We even have the data that proves it works.
But as a critical reader you might be thinking, "Please, no more tools. I'm drowning in tools, and I don't believe you anyway."
In the six remaining articles in this series we will present our stories, our data, and our theories on how code review can be lightweight and fun, yet effective and measurable, and how it gives a satisfactory answer to all the questions above.
So finish the story already!
By now you can guess how the story ends. Using arguments not unlike those above, Mr. Metrics and I convinced Mr. CTO to at least try our lightweight code review technique in a pilot program with one development group that was already hopelessly opposed to Fagan inspections. The metrics that came out of that group demonstrated the effectiveness of the lightweight system, and within 18 months Code Collaborator was deployed across the entire organization.