Software development is largely a communications exercise. The business organization must communicate extensive detail to the software development group about business processing rules and requirements. Technical designers and business experts need to negotiate technical solutions that meet the constraints and capabilities of the software while fulfilling the business requirements. Developers, coders, and testers must accurately receive the final specification so that the software can be built and tested. If that communication process breaks down at any point in the chain, the message becomes garbled, causing missing or defective functionality, ineffective testing, or a combination of the two.
Unfortunately, when it comes to implementing more effective software engineering processes, both cultural and economic issues stand in the way. Unless the product is life-critical, such as medical or aviation software, a rigorous software engineering process is difficult to justify on a risk/reward basis. For the general business community, formal software engineering processes are implemented ad hoc and at the discretion of the project managers and supervisors. Business experts, especially those who serve as requirements providers part-time, aren't likely to invest in the learning curve required of formal techniques. Since most software costs are straight expense items (primarily labor cost), software development is represented on the books as ordinary expense rather than a capital investment. This makes investment in software engineering tools and training a tough sell.
The challenge for software development managers is to find tools and techniques that produce highly reliable systems without requiring massive cultural changes in the IT and business organizations that build them. Addressing this challenge requires understanding the two biggest contributors to poor software quality. First, most software defects and shortcomings result from poor specification of functional requirements. In effect, many software reliability issues stem from the fact that we know how to code programs; we are just never clear on what to code. The second issue is software test coverage. Typically, software test cases are designed manually by a test engineer with no particular test coverage criteria in mind. As a result, the typical test case suite covers only about 60 percent of the functionality. Test automation has gained ground, but most tools simply automate tests that are designed manually. If the design of a particular test is flawed, or if the suite of tests does not provide full coverage, test automation offers limited value.
Model-based testing is based on the premise that lowering costs and improving software reliability require a tight link between functional specifications and test cases. The test process should find problems in the specification of requirements and guarantee that the functionality called out in the specification is completely exercised during the testing effort. If testers can develop full-coverage test scripts directly from quality specifications, they can be highly confident that the functionality has been successfully translated into the delivered applications.
The process assumes that if the specifications can be modeled rigorously, and if this model can automatically generate equally rigorous test scripts, the functional integrity of the software will improve significantly, even if other project factors such as scheduling, detailed design, technical design, and project management remain problematic. By using automated test design processes, not only will the test cases cover all of the functional requirements, but the time and effort required for test design will be substantially reduced. Furthermore, if this process can assure the basic functional integrity of the software, other process improvements can be tackled incrementally to address other aspects of the software lifecycle.
Model-Based Testing Overview
The model-based test process begins when the requirements team writes the specification using existing specification formats and processes. A test-modeling specialist translates the specification into a graphical model of the processing logic, inputs, and outputs. The graphical approach is an adaptation of Cause-Effect state model diagrams, with each cause and effect representing a component of the business processing logic that the software must execute. The modeling process immediately identifies any inconsistencies and ambiguities. Missing causes, missing effects, or unclear interactions between causes and effects as documented in the specifications are clearly exposed. For example, the specification may state:
"…Add input value A to value B. This number must be positive…"
This raises the following questions:
- Which number must be positive?
- What happens if the value is not positive?
The test modeling process is design-neutral; however, awkward user interfaces and other indicators of poor design tend to be exposed by virtue of their excessive complexity in the modeling.
All such questions about the exact requirements are documented as "specification ambiguities" and passed to the appropriate analyst or designer for resolution. The ambiguity review process has proven extremely effective in finding problems in specifications. It differs from normal specification walkthroughs because the review is done in the context of creating a rigorous logical model of the expected behavior. Ambiguities are identified and documented by the test engineer doing the modeling work. Modeling expertise on the part of the business analysis or interface designers is not required and, frequently, these groups are unaware of the test models being constructed. Thus the project receives the benefits of a rigorous software engineering process at a key leverage point in the lifecycle without requiring broad team knowledge of the process.
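The "positive number" example above can be made concrete. The following sketch (names and rules are hypothetical, not from any real project) shows why the modeling step cannot proceed past an ambiguous requirement: the two plausible readings of the rule are different logical functions, and concrete inputs on which they disagree prove the specification is ambiguous.

```python
# The ambiguous rule: "Add input value A to value B. This number must be
# positive." Which number? Each reading becomes a different effect function.

def effect_sum_positive(a, b):
    """Interpretation 1: the SUM (A + B) must be positive."""
    return a + b > 0

def effect_b_positive(a, b):
    """Interpretation 2: the input value B must be positive."""
    return b > 0

# If the two readings disagree on any concrete input, the spec is ambiguous
# and the question goes back to the analyst for resolution.
for a, b in [(5, -2), (-1, 3)]:
    if effect_sum_positive(a, b) != effect_b_positive(a, b):
        print(f"ambiguity exposed at A={a}, B={b}")
```

Here A=5, B=-2 satisfies one interpretation but not the other, which is exactly the kind of discrepancy the modeler documents as a specification ambiguity.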
When the test model is complete, COTS (commercial, off-the-shelf) software is used to generate detailed test scripts directly from the model. In principle, any test design software that evaluates logic structures could be applied, but my company uses the Caliber-RBT product, which generates test scripts based on stuck-at-one, stuck-at-zero coverage criteria. A number of studies have shown that the stuck-at-one approach offers a high level of logical coverage. By running the test design tool directly against the completed test model, detailed test descriptions are immediately available, removing the requirement for a separate, manual test case design effort. Instead, the test scripts that will exercise the specified functionality are available at the same time the specification itself (cleansed of ambiguities) is published. These test scripts also confirm the expected behavior of the proposed system, either in additional walkthroughs or as a separate reference document. The scripts are also immediately available to QA teams for the purpose of creating appropriate test data and beginning test execution planning.
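To illustrate the generation step, the sketch below enumerates a toy cause-effect model into a decision-table test suite. This is a deliberately simplified stand-in, not Caliber-RBT's actual stuck-at algorithm, and the causes and business rule are invented for illustration; the point is that once the logic is modeled, the full suite falls out mechanically with no manual test design.

```python
from itertools import product

# Hypothetical cause-effect model: three boolean causes driving one effect.
causes = ["order_total_over_100", "customer_is_member", "promo_code_valid"]

def discount_applies(over_100, member, promo):
    # Illustrative business rule: discount for members with a large order
    # or a valid promo code.
    return member and (over_100 or promo)

# Enumerate every cause combination and record the expected effect,
# yielding a complete decision-table test suite.
test_scripts = []
for combo in product([False, True], repeat=len(causes)):
    test_scripts.append({
        "inputs": dict(zip(causes, combo)),
        "expected_effect": discount_applies(*combo),
    })

print(len(test_scripts))  # 8 generated cases: full coverage of the model
```

A real generator prunes this exhaustive table using coverage criteria such as stuck-at-one/stuck-at-zero, but the principle is the same: the scripts are derived from the model, not invented case by case.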
Model-Based Testing Results
The model-based testing process yields production implementations with extremely high reliability, independent of the type of application or the platform on which it is implemented. Typically, applications encounter few, if any, post-implementation defects in functional logic. This does not imply that the software is "proven" correct; black box testing of any kind will not necessarily detect defects in coding structures that depend on specific execution paths or sequences. However, as a practical matter, the process is highly effective for the following reasons:
- The modeling process forces a rigorous examination of the logical consistency of the specification, the largest root source of software defects.
- Automatic design of test scripts directly from the model removes the requirement for manual test case design, thus removing a manual link in the communication chain through which errors can be introduced.
- Automated test scripts always provide full functional coverage. This cannot be guaranteed when individual test analysts with varying experience and commitment levels design the tests.
- The process significantly shortens the feedback loop from requirements to test. This has a positive impact on the time and cost to fix problems.
Model-based testing delivers high reliability without adding cost to the requirements and testing phases. Once an organization achieves this level of reliability, it can address other aspects of the software development process without fear that software reliability will be compromised. For example, the organization can try new budgeting and project management techniques or pilot new software development tools. New users of model-based testing often address requirements specification formats and processes with a goal of reducing the number of ambiguities raised by the test modelers.
Several case studies have been conducted to quantify the value of the model-based testing approach.
E-Commerce Enterprise. In "A Case Study in Extreme Quality Assurance," Jim Canter and Liz Derr provide an extensive analysis of the impact of an enterprise-wide implementation of model-based testing at an e-commerce firm. In their analysis, the number of post-implementation defects in software using traditional requirements analysis and testing techniques averaged eight to ten per implementation. This defect rate was independent of the number of people testing the application, which suggests that added test resources don't necessarily result in finding more defects. Application of model-based testing resulted in implementations with zero to three defects per release. In addition, the number of defects discovered during test execution also dropped, indicating that the model-based ambiguity reviews were effective in identifying and resolving requirements issues before they had the opportunity to become defects in the actual software.
Commercial Banking Project. Software Prototype Technologies undertook implementation of model-based testing in a project for a major U.S. bank. The business objective was to create a software system that would allow business banking customers to view and manage their accounts online via the Internet. The project structure for this effort was particularly complex and involved two business banking user organizations, five existing software systems, an off-shore software development vendor, QA staff from the corporate IT department, and a separate requirements management/test modeling team (creating both the specifications and the test model). Model-based testing processes were applied to the core application under development but not to the interfacing systems being modified (these systems used their existing specification and testing procedures). The core application accounted for about 70 percent of the total functionality.
Five weeks after production implementation, a breakdown of reported production issues was developed and is shown in the following chart:
In this chart, "non-defect reports" represent reported problems that were, in fact, correct as defined in the specification. "Data conversion" issues resulted from unanticipated production data configurations.
Hardware Configuration Engine. In another example, Sun Microsystems implemented an online rule-based software engine to assist their worldwide sales staff in configuring hardware suites for customers and subsequently generating price quotes. The hardware configuration options and rules are extremely complex and change frequently as new products are introduced and older products discontinued. Accepted quotes are transmitted directly to the factory floor for assembly, making software defects that allow invalid configurations very costly.
Organizationally, this project was another example of fragmented roles and responsibilities. A third-party vendor provided software development. Separate marketing groups and engineering specialists defined requirements for each product line. The resulting requirements documents frequently conflicted and always assumed that the audience was very knowledgeable of the products. Testing and configuration management were handled by a separate in-house QA group. Further, product releases and competitive pressures dictated that new software be released to the worldwide sales staff on a monthly basis.
The project transitioned to a model-based testing approach after defect rates (including invalid configurations) reached unacceptable levels. The transition required that the modeling team process a significant inventory of existing specifications and product announcements, resolving ambiguities in pre-existing functionality as well as in the new functions and rules for the pending releases.
Again, there is a clear and rapid trend toward high software reliability when the model-based testing processes are applied. Defects not only declined significantly, but major defects such as invalid configurations were eliminated entirely.
Implementing Model-Based Testing
Before you make the leap into model-based testing, be prepared to address significant obstacles. The introduction of formal software engineering practices into general business enterprises creates cultural and institutional impediments and affects the roles of test engineers.
Ironically, the most difficult implementation issues often fall within the QA organization. Many traditional QA professionals are deeply suspicious of test cases designed by software tools, believing that the test cases they manually design are superior to those produced using software algorithms. Time and time again, we've come across QA professionals who are reluctant to accept a new approach to test design.
If this is the case at your organization, you should be prepared to counter resistance with facts. Several case studies have compared the functional variation coverage (the quantitative coverage metric for the stuck-at-one test algorithms used in Caliber-RBT) of automated versus manually designed test suites. As expected, the automated test design provides 100 percent functional coverage. Manually designed test cases generally provide 60 to 70 percent coverage.
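That coverage gap is easy to demonstrate quantitatively. In the sketch below (rule and scenario names are hypothetical), the full set of functional variations is taken, for simplicity, as every combination of three boolean causes; a plausible hand-written suite of "typical" scenarios is then measured against it.

```python
from itertools import product

def discount_applies(over_100, member, promo):
    # Illustrative business rule used to define the functional variations.
    return member and (over_100 or promo)

# All functional variations of the three-cause model: 2**3 = 8 combinations.
all_variations = set(product([False, True], repeat=3))

# A plausible manually designed suite: a tester covers the "typical"
# scenarios but misses several edge combinations.
manual_suite = {
    (True, True, False), (False, True, True), (True, True, True),
    (False, False, False), (True, False, False),
}

coverage = len(manual_suite & all_variations) / len(all_variations)
print(f"manual coverage: {coverage:.1%}")  # 5 of 8 variations = 62.5%
```

The automated generator, by construction, emits all eight variations; the manual suite lands squarely in the 60 to 70 percent band the case studies report.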
Adopting model-based testing practices may also require significant procedural changes, staffing adjustments, and training efforts, since the process demands a different skill set and new team interactions. For example, test engineers must be competent at creating complex logic models, a very different mental process from writing test cases. The model-based test process is less concerned with "breaking" the code than with ensuring complete functional understanding of the specifications. The test engineer must now focus on accurately modeling the requirements rather than attempting to conceive of test cases that will find defects. Test engineers must also separate the process of test case design from those of test case implementation and execution. Traditionally, these tasks are blurred or run together, creating the illusion that traditional test approaches take less time.
Model-based testing forces the testing group to be more directly involved in the requirements process and also opens testing products (the generated test scripts) to other groups on the team. This is often seen as contrary to the traditional view of testing as an independent and relatively isolated process.
Perhaps the most important prerequisite to implementing model-based testing is to elevate the discussion of software testing to the senior management level, the place where the repercussions of software failure are ultimately felt most. An organization that fully acknowledges the risks of software failure will better embrace processes for reducing them. When this happens, software testing can transition from the stepchild of software development into the significant role it deserves within an organization.