be tested (e.g., it wasn't changed, it is not yet available for use, it has a good track record, etc.); but whatever the reason a feature is listed in this section, it all boils down to relatively low risk. Even features that are to be shipped but not yet "turned on" and available for use pose at least some degree of risk, especially if no testing is done on them. This section will certainly raise a few eyebrows among managers and users (many of whom cannot imagine consciously deciding not to test a feature), so be careful to document the reason you decided not to test a particular feature. These same managers and users, however, will often approve a schedule that could not possibly allow enough time to test everything. This section is about intelligently choosing what not to test (i.e., low-risk features), rather than just running out of time and not testing whatever is left on the ship date.
Some companies that make safety-critical systems, or that have a corporate culture that "requires" every feature to be tested, will have a hard time politically listing any features in this section. If every feature really is tested, fine; but if resources do not allow that degree of effort, using the Features Not to Be Tested section actually helps to reduce risk.
One other item to note is that this section may grow if a project falls behind schedule. If the risk assessment (see Sections 6 and 7) rates each feature by risk (for example, H, M, and L), it is much easier to decide which additional features pose the least risk if moved from Section 7 to Section 8. Of course, there are other options besides reducing testing when a project falls behind schedule, and these will be discussed in Section 18, Planning Risks.
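The triage described above can be sketched in code. This is only an illustration, not part of any test plan template: the feature names and H/M/L ratings below are invented, and a real assessment would of course weigh more than a single letter grade.

```python
# Hypothetical sketch: when the schedule slips, identify the lowest-risk
# features as candidates to move from "Features to Be Tested" (Section 7)
# to "Features Not to Be Tested" (Section 8).

RISK_ORDER = {"L": 0, "M": 1, "H": 2}  # low risk sorts first

def defer_candidates(features, count):
    """Return the `count` lowest-risk features from a list of
    (feature_name, risk_rating) pairs."""
    ranked = sorted(features, key=lambda f: RISK_ORDER[f[1]])
    return [name for name, _ in ranked[:count]]

# Invented example inventory with H/M/L ratings.
features = [
    ("billing interface", "H"),
    ("report archiving", "L"),
    ("login/security", "H"),
    ("help screens", "L"),
    ("data export", "M"),
]

print(defer_candidates(features, 2))
# -> ['report archiving', 'help screens']
```

Because the sort is stable, features with equal ratings keep their original order, so the list itself can encode a secondary priority within each risk band.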
Since this section is the heart of the test plan, some of my clients choose to label it Strategy rather than Approach. The Approach should contain a description of how testing will be done (approach) and discuss any issues that have a major impact on the success of testing and, ultimately, of the project (strategy). For a Master Test Plan, the approach to be taken for each level should be discussed, including the entrance and exit criteria from one level to the next.
For example: System Testing will take place in the Test Labs in our London office. The testing effort will be under the direction of the London VV&T team, with support from the development staff and users from our New York office. An extract of production data from an entire month will be used for the entire testing effort. Test Plans, Test Design Specs, and Test Case Specs will be developed using the IEEE/ANSI guidelines. All tests will be captured using SQA Robot for subsequent regression testing. Tests will be designed and run to test all features listed in Section 8 of the System Test Plan. Additionally, testing will be done in concert with our Paris office to test the billing interface. Performance, Security, Load, Reliability, and Usability Testing will be included as part of the System Test. Performance Testing will begin as soon as the system has achieved stability. All user documentation will be tested in the latter part of the System Test. The System Test team will assist the Acceptance Test team in testing the installation procedures. Before bug fixes are reintroduced into the test system, they must first successfully pass Unit Test and, if necessary, Integration Test. Weekly status meetings will be held to discuss any issues and revisions to the System Test Plan.