would I be making a change to this code? After all, it's the ability to respond to change economically that separates good code from bad.
But wait, you say, shouldn't a code review also focus on whether the code actually works? Yes, and that's why you bring your automated tests to the code review. Without tests to review, the only way to determine whether the code works is to painstakingly trace through the logic using mental assertions. By including automated tests and their results in the code review, reviewers can objectively validate that the code works as expected. And as an added bonus, you'll get a valuable review of the tests themselves. Reviewers may identify corner cases you missed, for example, and in turn your tests become more solid. You'll be a better tester for it.
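For example, here's a minimal sketch of the kind of focused JUnit test worth bringing to a review; ShoppingCart and Item are hypothetical stand-ins for whatever code is actually under review:

```java
// Hypothetical example of review-ready tests: each test documents one
// expected behavior, including a corner case a reviewer might probe.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ShoppingCartTest {

    @Test
    public void totalsThePriceOfAllItems() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(new Item("book", 29.95));
        cart.add(new Item("mug", 9.95));
        assertEquals(39.90, cart.total(), 0.001);
    }

    @Test
    public void totalsAnEmptyCartToZero() {
        // The kind of corner case a reviewer might ask about.
        assertEquals(0.0, new ShoppingCart().total(), 0.001);
    }
}
```

Tests like these give reviewers concrete assertions to check the code against, instead of mental ones.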
Use Automation to Your Advantage
Early on I said that metrics from static analysis are largely irrelevant. That doesn't mean we should ignore them altogether, especially if the metrics can be generated cheaply. Tools that automatically detect code that can be improved and collect a manageable number of useful metrics are well worth the one-time setup cost. Put them on a recurring schedule and they'll continually keep watch over your entire codebase throughout the development cycle. Then during your code reviews you can focus on things that a computer can't check for you, such as good naming and documentation.
Choose code-checking tools that you can tailor for your project. If a tool's inspection rules don't jibe with your project's style, the results are meaningless to you. If the volume of output is too overwhelming, nobody will pay attention. It's a delicate balance, so be sure to start with a minimal set of valuable metrics and only add new metrics if you're sure somebody will care. I’ve had success using the following free tools on Java projects:
- PMD: A static Java code analyzer that includes a boatload of built-in rules and lets you write custom rules. CPD (the Copy/Paste Detector) is an add-on to PMD that uses a clever set of algorithms to find duplicated code. (http://pmd.sourceforge.net)
- Checkstyle: A highly configurable coding standard checker with a default set of standard and optional checks. (http://checkstyle.sourceforge.net)
- Cobertura: A code coverage analyzer that identifies areas of code that aren’t covered by tests. Unfortunately, code coverage metrics are often used as instruments for programmer (and tester) abuse. When used properly as constructive feedback, these metrics can help improve your testing skills by identifying aspects of code that often go untested. (http://cobertura.sourceforge.net)
- JDepend: A static code analyzer that generates design-quality metrics based on Java package dependencies, including identifying circular package references. Disclaimer: Your humble author wrote this tool. See the sketch after this list for one way to run it automatically. (http://www.clarkware.com/software/JDepend.html)
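Because JDepend has a programmatic API, one convenient way to put design checks on a recurring schedule is to assert them directly in your test suite so they run with every build. Here's a minimal sketch, assuming JDepend's jdepend.framework API and a hypothetical build/classes directory of compiled classes:

```java
// Fails the build whenever a circular package reference creeps in.
// Assumes JDepend's jdepend.framework API; "build/classes" is a
// hypothetical path to your project's compiled classes.
import java.io.IOException;

import jdepend.framework.JDepend;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertFalse;

public class PackageDependencyTest {

    private JDepend jdepend;

    @Before
    public void setUp() throws IOException {
        jdepend = new JDepend();
        jdepend.addDirectory("build/classes"); // hypothetical location
    }

    @Test
    public void packagesAreFreeOfCycles() {
        jdepend.analyze();
        assertFalse("Circular package references detected",
                    jdepend.containsCycles());
    }
}
```

Run as part of your regular test suite, a constraint like this turns a design metric into immediate, automatic feedback rather than a report somebody has to remember to read.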
When you run these tools, it's important to bear in mind that all code and design metrics are imperfect. Software is a human activity, and as such we should trust the judgment of our teammates over the cold output of a machine. Use metrics as a guide—not as a big stick with which to pummel your fellow programmers.
Give Neglected Code a Good Home
So you've tenderly molded your code into good shape using all the craft techniques we've discussed, and then one day you get saddled with maintaining code that didn't get as much love. How do you go about crafting it into something you'd call your own? The answer: One line at a time.
In the same way you incrementally improved code you already owned, you can improve new code that comes your way. That