Manual vs. Automated Code Review

The Fight for Superiority

Recently, I had the privilege of viewing a great presentation on security-testing strategies given by Vinnie Liu, Managing Director of Stach & Liu. The crux of Vinnie's argument was that, while many professional code reviewers and penetration testers claim that manual code review is always the best and most accurate way to find security defects, there are, in fact, situations in which automated analysis tools (either white box or black box) will outperform an expert human reviewer.

This is not to say that expert reviewers don't have their place; most design-level security issues cannot be found by automated tools. One good example of this type of vulnerability is improper forgotten-password functionality. On some Web sites, when a user has forgotten his password, the application will prompt him to answer some questions to verify his identity, such as, "What was the name of your first pet?" This is not a security problem in and of itself, but not all identity-verification questions are equally secure. One verification question that I've seen on a number of Web sites is, "What was the make of your first car?" The problem with this particular question is that there are only a handful of possible answers. There aren't that many auto manufacturers in the first place and, furthermore, it's unlikely that a first-time car buyer is going to purchase a Rolls-Royce or an Aston Martin. Without knowing anything about the user, an attacker could guess Ford, Toyota, Honda, Jeep, etc., and stumble onto the right answer within a dozen tries, in most cases.
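To make the small answer space concrete, here is a minimal sketch of the attacker's position. The guess list is hypothetical, illustrative only, and not drawn from any real popularity data; the point is simply that a short list of common makes covers most realistic answers.

```python
# Illustrative sketch: the answer space for "make of your first car" is tiny.
# This guess list is hypothetical, not drawn from any real data set.
COMMON_MAKES = [
    "ford", "toyota", "honda", "chevrolet", "nissan", "jeep",
    "hyundai", "kia", "volkswagen", "subaru", "dodge", "mazda",
]

def guesses_needed(secret_answer, guess_list=COMMON_MAKES):
    """Return how many tries a naive attacker needs, or None if not found."""
    target = secret_answer.strip().lower()
    for attempt, guess in enumerate(guess_list, start=1):
        if guess == target:
            return attempt
    return None

print(guesses_needed("Honda"))  # found on the 3rd try
```

Any user whose first car appears in that dozen-entry list is compromised in at most twelve attempts, which is exactly the design-level weakness no scanner will flag.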

The point is that no automated tool could determine whether the make of your first car is a good identity-verification question. This doesn't mean that humans are always better than tools, though. Once we start looking at implementation-level defects or vulnerabilities that arise through configuration mistakes, we start to see a number of cases in which a scanning tool will beat a human reviewer. I wrote a September 2008 article titled "Warm and Fuzzy" that extolled the benefits of fuzzing for finding obscure parser errors. What if we wanted to perform fuzz testing manually? Could a human theoretically create millions of different malformed test files and test the application against them by hand? Sure. Would he die of exhaustion and/or boredom long before finishing? Definitely.
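The tedium the machine absorbs can be sketched in a few lines. Below is a minimal random-mutation fuzzer; the seed input and the commented-out `parse` call are placeholders for whatever parser is actually under test.

```python
import random

def mutate(data, n_flips=8, seed=None):
    """Return a copy of `data` with `n_flips` randomly chosen bytes replaced."""
    rng = random.Random(seed)  # seeded for reproducible test cases
    buf = bytearray(data)
    for _ in range(n_flips):
        pos = rng.randrange(len(buf))
        buf[pos] = rng.randrange(256)
    return bytes(buf)

# Generate a batch of malformed inputs from one valid seed file and feed each
# to the target parser, watching for crashes or hangs.
seed_file = b"%PDF-1.4 ... a minimal valid input ..."
for i in range(1000):
    test_case = mutate(seed_file, seed=i)
    # parse(test_case)  # hypothetical target; wrap in try/except and log crashes
```

A thousand iterations run in a blink; the millionth malformed file costs the machine nothing, while a human tester would have given up long before.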

Another situation in which machines outperform people is in finding inadvertently exposed resources. Many sites have "/admin" directories, backup files, password files, or any of thousands of potentially sensitive resources that should never be viewable by the public. Through some misconfiguration or error on the part of the site's administrators, however, they are accessible. Again, could a security expert manually sit down at a browser and try thousands of different resource variations? Yes. Again, though, he would surely die of boredom first. More seriously, code reviewers rarely come cheap and paying experts to perform tasks that can easily be automated is just not a good use of time or money.
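This kind of enumeration is equally mechanical. The sketch below assumes a hypothetical `fetch` function (injected so the logic can be exercised without a live server) and a tiny sample wordlist; a real scan would use a standard HTTP client and a wordlist of thousands of entries.

```python
# Sketch of automated exposed-resource enumeration. `fetch` takes a URL and
# returns an HTTP status code; it is injected so the probing logic is testable.
WORDLIST = ["/admin", "/backup.zip", "/.git/config", "/passwords.txt", "/old/"]

def find_exposed(base_url, fetch, wordlist=WORDLIST):
    """Return the paths for which `fetch(url)` reports an accessible (200) status."""
    exposed = []
    for path in wordlist:
        status = fetch(base_url.rstrip("/") + path)
        if status == 200:
            exposed.append(path)
    return exposed

# Usage with a stand-in fetcher; a real run would issue actual HTTP requests.
def fake_fetch(url):
    return 200 if url.endswith("/admin") else 404

print(find_exposed("https://example.com", fake_fetch))  # ['/admin']
```

Swapping the five-entry sample list for a few thousand candidate paths changes nothing for the machine, which is precisely why this job belongs to a tool.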

Choosing between human reviewers and automated tools shouldn't be an either-or proposition. Only humans can find design-level issues such as weak identity-verification questions, while automated tools should be used for brute-force tasks such as fuzzing or directory enumeration, where manual testing would be too tedious and expensive.

Author's Note: Thanks again to Vinnie Liu for sharing his personal experience in this area.

AgileConnection is a TechWell community.