Concerns about computer security have raised the demand for automated tools that can analyze source code for vulnerabilities and defects. Find out how you can put automated static analyzers to work for you.
Many of this magazine’s readers are familiar with the STAR EAST and STAR WEST conferences. Here is a trivia question for you: what does STAR stand for? Give up? It stands for Software Testing Analysis & Review, an acronym for the types of activities that we in the software quality field perform. In many organizations, however, the analysis activity is often overlooked, and that’s too bad, because analysis is a powerful tool in the quality arsenal.
Static code analysis is analysis of computer software that is performed without actually executing that software. (Analysis performed on executing software is known as dynamic analysis.) In most cases, static analysis is performed on the source code. In recent years, the importance of computer security has created an expanded demand for automated tools that can analyze source code for security vulnerabilities and coding defects that could be exploited. Many security vulnerabilities are caused by questionable coding practices, such as using an input variable as a loop index without first checking that its value is within a valid range. Contemporary static analysis tools are able to analyze source code with a much lower false-positive rate (claiming code is defective when it is not) than earlier lint-style detector tools. Because they examine only small portions of the source code at a time, lint-style tools typically have false-positive rates of 50 percent or higher. The leading contemporary automated static analyzer (ASA) tools claim, and our experience to date has shown, false-positive rates under 20 percent. These tools achieve this by parsing the source code much as a compiler does, creating a syntax tree and a database of the entire program’s code, which is then analyzed against rules or models. The ASA tools then create a report of suspected defects in the code.
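To make the loop-index example concrete, here is a minimal C sketch (the function and variable names are ours, purely for illustration, not taken from any particular tool). The first function uses a value taken directly from input as a loop bound with no range check, the pattern an ASA tool would flag; the second validates the value first.

    #include <stdio.h>

    #define BUF_SIZE 10

    static int buf[BUF_SIZE];

    /* Flagged pattern: "count" comes straight from input and is used as
       a loop bound with no range check, so the loop can write past the
       end of "buf". */
    void zero_unchecked(int count)
    {
        for (int i = 0; i < count; i++)
            buf[i] = 0;
    }

    /* Defensive version: validate the tainted value before looping. */
    int zero_checked(int count)
    {
        if (count < 0 || count > BUF_SIZE)
            return -1;
        for (int i = 0; i < count; i++)
            buf[i] = 0;
        return 0;
    }

    int main(void)
    {
        int count;
        if (scanf("%d", &count) != 1)
            return 1;
        /* Calling zero_unchecked(count) here is what an ASA would report;
           only the checked version is safe to call. */
        return zero_checked(count) == 0 ? 0 : 1;
    }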
Many of the defect types found by ASA tools would be difficult to find using peer group inspection techniques (the R in STAR). This is because a defect may be a combination of source-code statements that are physically far apart in the source code, for instance, the allocation of memory and then, pages later, a return statement without releasing that memory. A human reviewer is limited in the amount of detailed source-code information that can be remembered from one page to the next. An ASA is not limited by this restriction, and besides, most of us do not relish the prospect of examining other people’s code for hours at a time. ASA tools also can find problems that elude traditional system-level testing, for example, an array-bounds overflow in which a string of twenty characters is written into a buffer of size ten. The ten overwritten memory locations may not cause a failure in the program until later in execution, or may not cause one in a repeatable way.
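A short C sketch of both defect patterns may help (the routine and its names are hypothetical). The memory leak and the buffer overflow below are exactly the kinds of suspect statements an ASA can connect even when they are pages apart in real code; note that the function never needs to run for the analyzer to find them.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int process_record(int id)
    {
        char *record = malloc(100);      /* memory allocated here ... */
        if (record == NULL)
            return -1;

        char buf[10];
        /* Array-bounds overflow: twenty characters (plus the
           terminator) copied into a ten-byte buffer. */
        strcpy(buf, "twenty characters!!!");

        if (id < 0)
            return -1;                   /* ... early return: "record"
                                            is never freed (a leak). */

        snprintf(record, 100, "record %d: %s", id, buf);
        puts(record);
        free(record);
        return 0;
    }

    int main(void)
    {
        /* Deliberately not calling process_record(): a static analyzer
           reports both defects without ever executing the code. */
        return 0;
    }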
Because ASA tools are totally unaware of user requirements, they cannot replace the benefits of peer reviews or good functional testing. Nor will ASA tools replace the need for dynamic analyzers, which can find problems, such as race conditions, that occur only during interactions among the executing application code, system resources, and interfaces.
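For instance, here is a minimal sketch of a race condition, assuming a POSIX threads environment. Two threads increment a shared counter without synchronization; whether any updates are lost depends on run-time thread interleaving, which is why a dynamic analyzer that watches the program execute is the right tool for this class of defect.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            counter++;        /* unsynchronized read-modify-write */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Expected 200000, but lost updates often make it smaller. */
        printf("counter = %ld\n", counter);
        return 0;
    }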
While ASA tools are not a “silver bullet,” they can detect up to 50 percent of defect types and security vulnerabilities before system testing is conducted, which reduces the amount of time needed for system testing and reduces the risk of defects escaping to the field and being discovered by customers. Table 1, from Boris Beizer’s Software Testing Techniques, lists typical defect types and their percentages.
ASA tools are effective on structural bugs (which are approximately 25 percent of the defects listed in Table 1).