parser errors. What if we wanted to perform fuzz testing manually? Could a human theoretically create millions of different malformed test files and test the application against them by hand? Sure. Would they die of exhaustion and/or boredom long before finishing? Definitely.
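The brute-force mutation a machine excels at takes only a few lines of code. The sketch below is purely illustrative: `toy_parser` stands in for whatever parser is under test, and the seed input and mutation strategy are assumptions, not anything from the original text.

```python
import random

def mutate(data: bytes, flips: int = 4) -> bytes:
    """Return a copy of `data` with a few randomly chosen bytes corrupted."""
    buf = bytearray(data)
    for _ in range(flips):
        pos = random.randrange(len(buf))
        buf[pos] = random.randrange(256)
    return bytes(buf)

def toy_parser(data: bytes) -> int:
    """Stand-in for the application under test: expects b'LEN:<n>:<payload>'."""
    header, n, payload = data.split(b":", 2)
    if header != b"LEN" or int(n) != len(payload):
        raise ValueError("malformed input")
    return len(payload)

seed = b"LEN:5:hello"
rejected = 0
for _ in range(10_000):
    case = mutate(seed)
    try:
        toy_parser(case)
    except Exception:
        # Each rejection (or crash) is one malformed test case a human
        # would otherwise have had to craft and feed in by hand.
        rejected += 1
print(f"{rejected} of 10000 mutated inputs were rejected or crashed the parser")
```

A real fuzzer would also record the inputs that cause crashes (as opposed to clean rejections) so a human can triage them later; that distinction is where the expert's time is actually worth spending.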
Another situation in which machines outperform people is in finding inadvertently exposed resources. Many sites have "/admin" directories, backup files, password files, or any of thousands of other potentially sensitive resources that should never be viewable by the public. Through some misconfiguration or error on the part of the site's administrators, however, they are accessible. Again, could a security expert manually sit down at a browser and try thousands of different resource variations? Yes. Again, though, they would surely die of boredom first. More seriously, security experts rarely come cheap, and paying them to perform tasks that can easily be automated is just not a good use of time or money.
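This kind of resource enumeration reduces to probing a wordlist of candidate paths and reporting any that the server does not reject. The sketch below is a minimal illustration: the wordlist is a tiny sample (real tools ship lists with thousands of entries), and `fake_fetch` is a stub standing in for an actual HTTP client so the example runs without network access.

```python
from typing import Callable, Iterable, List

# Hypothetical sample wordlist; real enumeration tools use far larger ones.
WORDLIST = ["admin/", "backup.zip", "db.sql", ".git/config", "passwords.txt"]

def enumerate_exposed(base_url: str,
                      paths: Iterable[str],
                      fetch: Callable[[str], int]) -> List[str]:
    """Probe each candidate path and report those that do not return 404.

    `fetch` is any callable taking a URL and returning an HTTP status code,
    e.g. a thin wrapper around urllib.request or the requests library.
    """
    found = []
    for path in paths:
        url = base_url.rstrip("/") + "/" + path
        if fetch(url) != 404:
            found.append(url)
    return found

# Stub transport for demonstration: on this imaginary site, only the
# /admin/ directory happens to be exposed.
def fake_fetch(url: str) -> int:
    return 200 if url.endswith("/admin/") else 404

print(enumerate_exposed("https://example.test", WORDLIST, fake_fetch))
# → ['https://example.test/admin/']
```

Passing the fetch function in as a parameter keeps the loop testable offline; in practice you would swap `fake_fetch` for a real client and add rate limiting so the scan does not hammer the target.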
The use of both human reviewers and automated tools shouldn't be an either-or proposition. Only humans can find design-level issues such as poor identity-verification questions, while automated tools should be used for brute-force situations like fuzzing or directory enumeration where manual testing would be too tedious and expensive.
Author's Note: Thanks again to Vinnie Liu for sharing his personal experience in this area.