While many articles do a great job of explaining the benefits of various ways to inspect software, relatively few address the issue of how to introduce software inspections into your development process. In practice, this turns out to be far from trivial.
Perhaps even more than other changes to the development process, attempts to introduce "traditional" inspections generally meet with a lot of resistance. Most objections are along the lines of "it's difficult," "it costs too much," or "we don't have time."
In this article, I will analyze objections to inspection using insights from diffusion research, explain how outsourced software inspection services work, and show how they can address many of the objections. The conclusion is that you can use an outsourced software inspection service to jump-start your organization into embracing the concept of inspection.
Causes of Resistance to Inspection
In Diffusion of Innovations, E. M. Rogers identifies five factors that characterize innovations:
- Relative advantage–the perceived advantages over the existing situation or other alternatives
- Compatibility–the degree to which it is felt that the innovation is consistent with the existing situation
- Complexity–the degree to which the innovation is difficult to understand
- Trialability–the degree to which an innovation can be tried
- Observability–the ease with which the benefits of an innovation can be observed, imagined, or described
Innovations that are perceived as having less complexity and greater relative advantage, compatibility, trialability, and observability will be adopted more rapidly. This model helps us to understand why inspection as a broad concept has met with such a slow adoption rate.
Let's investigate each of these factors in more detail by looking at the perceptions people hold relative to starting an inspection process.
1. Relative advantage. There is an impressive amount of literature (see An Encompassing Life-Cycle Centric Survey of Software Inspection by Oliver Laitenberger and Jean-Marc DeBaud for a very recent overview) reporting that inspections indeed offer an advantage, since they can remove up to 88 percent of all defects. No other defect detection technique has been shown to find more than half. But to understand the adoption of innovations, you need to look at the perceived advantage. In fact, many people don't see the benefits of inspection; all they see are what they perceive as disadvantages:
- Fear of being exposed. A fundamental element of most inspection techniques is to have your peers review your work products. Few people are comfortable with the idea of others finding defects in their code. While they will usually not dare to directly voice this fear, it might explain the vigor with which they raise some of the other objections.
- Fear of losing control. Most engineers have a strong need to be in control of their work. By showing their work to others, and accepting proposals for improvements, they are essentially granting others control over the way they do their work.
2. Compatibility. Inspections do not require new tools and equipment, and from a process point of view, they are independent of existing testing phases. However, they do have influence on the timing of all subsequent testing phases. This characteristic leads to the following objections:
- We have no time for this. This is a classic argument used against any kind of preparatory or preventive work: scheduling time for these activities moves out all remaining milestones, unless you're willing to believe you will earn back the investment later. Even if you believe you will make up for the time "lost" in inspection, you have to make adjustments to the initial parts of your development schedules, which will have an impact on the resource planning for testing. Given the importance that many higher-level managers put on the question "When do we start testing?" moving out the start of testing can be really hard to justify.
- We don't want to touch our process. You would hear this argument in an organization that has recently invested a lot of time, energy, and money into improving their software development process and is wary of introducing yet another change. They might feel they already spend an awful lot of time and money on finding defects, and would need to do a lot more measurement with regard to the effectiveness of the tools and processes they just installed, before they add more to their process.
- Fear of bureaucracy. Then there are those who have not invested heavily in a development process, because they love their informal culture. They feel that inspection increases the amount of bureaucracy and restricts their creative approach to software development.
3. Complexity. Inspections can be performed in different stages of the development cycle. Requirements, designs, and implementation/code can all be inspected. Development principles like separation of concerns, as supported by the V-model, help to keep the complexity under control. But the sheer size of modern software applications, where a mobile phone application can be a million lines of code, adds a whole other dimension to complexity. The perception of high complexity is strongly related to this dimension:
- How can this be better, faster, and cheaper? The fear of delayed schedules, even though actual measurements show otherwise, is perhaps the best evidence that the perceived complexity of inspection is high.
- We don't have the resources. Even if the relative advantages are understood, there is the practical complexity that you need people who are proficient in the programming language, have sufficient application knowledge, have been trained on doing inspections, and last but not least, have the right attitude. It may be far from simple to find these people and direct their effort towards inspection, away from what they're doing now. Often, the attitude of test engineers is better, but they might not have the coding skills that are needed to do an effective inspection.
4. Trialability. To get meaningful results, you need to inspect a nontrivial piece of code, which will usually involve quite a few people. This leads to an additional concern:
- The investment is too high. Introducing inspection means training people and setting up a structure to collect and process metrics.
5. Observability. The measured results cited in volumes of inspection literature speak for themselves. But observability also refers to the extent to which the benefits can be imagined or described. This turns out not to be easy for the majority of managers and developers. Some of the factors are:
- Overconfidence in testing effectiveness. Many defects are found during testing, and a lot of time and money may have been sunk into tools that measure test coverage, and developing test cases that increase coverage. Having reached a test coverage of 80 percent of the statements, it is tempting to assume that you're finding 80 percent of the defects (which you aren't because you're covering only a fraction of the code paths, and an even tinier fraction of the combinations of values that the variables may have on these code paths). It is also tempting to think that you only have a little more to go to cover the remaining 20 percent of the code (which isn't true, because getting more coverage gets harder and harder).
- Overconfidence in testing tools. A related phenomenon is the belief that, since tools such as Purify and Insure++ are very good at finding memory leaks and null pointer dereferences, these defects ought to be found by these tools. However, these tools will not find these defects unless there is a test case that exposes the defects, and if you can find the defect earlier at less cost (e.g., through inspection), you're better off.
- Misunderstood responsibilities. Many programmers operate under the assumption that it is their responsibility to implement functionality, and it is the responsibility of the testing/QA department to find defects. To them, doing inspections would be doing the tester's job.
Outsourced Software Inspection Services
Outsourced inspection services, a new approach that has emerged over the past few years, may be a way to address many of the specific issues mentioned above. A typical outsourced inspection service consists of the following steps:
Step 1. Collect source code. In this step you decide which code you want inspected, and you make a package that includes the source files and any header/include files that may be needed to compile the source files. The service provider may provide tools to assist in this process.
Step 2. Complete an application survey that gives the service provider the technical details needed to inspect your application. This includes data such as the target operating system, the language the application is written in, the compiler version, and so on. There may also be questions regarding preferences, such as "Do you want to receive reports of null pointer dereferences in out-of-memory conditions?"
Step 3. Submit the application files plus the survey to the service provider via secure FTP, through a Web site protected with SSL, or on a CD-ROM using a secure courier service. For extra security, the package may be encrypted.
Step 4. The service provider inventories the code, and supplies you with an inventory and, if needed, a cost adjustment.
Step 5. The service provider applies static analysis technology (see, e.g., "Value Lattice Static Analysis" by Bill Brew and Maggie Johnson in Dr. Dobb's Journal, March 2001) to inspect the code and produce a set of inspection points. The static analysis is parameterized by the answers to the questions on the survey.
Step 6. Trained engineers at the service provider remove false positives, trying to ensure that only real defects will be reported to you.
Step 7. You receive an inspection report containing detailed descriptions of the defects found, and a management report containing metrics on the inspection (and trend information if you've sent the same application before).
The first thing I'd like to clarify is that by nature, outsourced inspection services are complementary to other quality initiatives. Outsourced inspections are generally poor at identifying functional defects, which require knowledge of the domain the code is dealing with.
Nonetheless, they are an effective way of finding structural defects such as NULL pointer dereferences, memory and other resource leaks, uninitialized variables, and bad deallocations.
Second, since it is impossible to simulate every single execution path with every possible value, and because tradeoffs have to be made to reach acceptable false positive and false negative rates, outsourced inspections don't find every instance of these structural defects (just like manual inspections won't find every instance).
Jump-Starting Inspection by Removing the Barriers
Outsourced inspection services have a number of unique features, many of which address objections that are often raised in relation to the general concept of inspection.
No use of in-house resources. This is the point of outsourcing: apart from a little bit of effort to package the source code and fill in the survey, these services do not require any in-house resources. This addresses compatibility by countering the objection "We have no time for this," complexity by countering the objection "We don't have the resources," and the trialability objection "The investment is too high."
Makes testing more effective. By finding nasty defects such as NULL pointer dereferences and uninitialized variables before testing, there will be fewer disruptions of functional testing, so your functional testing will be more effective. This addresses the complexity concern "How can this be better, faster, and cheaper?" by showing where time can be gained downstream.
Independent third-party perspective. Every programmer's work is judged against the same standard by a third party that has no stake in how the code was written. This is considerably less threatening than the prospect of having your work taken apart by a colleague, and thus addresses two relative advantage objections: fear of being exposed and fear of losing control.
Finds defects that testing misses. A research group at a large European corporation performed an independent validation of the effectiveness of outsourced inspections by outsourcing inspection of an embedded application that had already been inspected in-house and unit tested. The subsequent outsourced inspection revealed thirteen heretofore undetected defects, of which eleven needed to be fixed. This addresses all three objections under observability: overconfidence in testing effectiveness, overconfidence in testing tools, and misunderstood responsibilities.
Closes the feedback loop. When people are confronted with their failings, their natural instinct is to take measures to prevent any future recurrence. A typical situation in dysfunctional software development organizations is that junior programmers are inadvertently inserting defects into code while senior developers are busy tracking down and fixing previously inserted defects. Outsourced inspection encourages the reverse situation, where experienced developers are freed up to focus on new coding while their less experienced colleagues learn by fixing the defects identified in the inspection report. Like the previous item, this addresses all three objections under observability.
Handoff creates quality data. An important component of the inspection process is measuring the number of defects found during inspections. Because the outsourced inspection process does not fix the defects, but only reports on them, the organization is presented with documentation supporting the measurement of defect counts, defect distributions, and trends. The handoff also increases compatibility by countering the objection "We don't want to touch our process" and the fear of bureaucracy.
There are many steps critical to improving the quality of the software you're developing, such as introducing requirements and design inspections. At a minimum, though, outsourced code inspections should show the value of early defect detection without a huge up-front time and resource investment. Once your organization is used to an independent third party examining its code, the barriers to having peers inspect each other's code should also be lower.
Acknowledgments. I'd like to thank Chris Verhoef for pointing out the relevance of Diffusion Research to understanding the slow adoption of inspection, and Pat Bitton, Lawrence Markosian and Rix Groenboom for numerous useful suggestions.
- Robert L. Glass, "What's so Great About Inspections?!?" 8 June 2001.
- Oliver Laitenberger and Jean-Marc DeBaud, "An Encompassing Life-Cycle Centric Survey of Software Inspection," ISERN-98-32.
- Rix Groenboom and Walter Loeffel, "Automated Software Inspection," Informatik, Feb. 2001.
Please send your comments to the author at [email protected].