such as pages of output, transactions, telephone calls, jobs, semiconductor wafers, queries, or application program interface calls. It has the advantage of being directly related to customer concerns. The common measure may be a natural unit or time unit.
Then you set the total system failure intensity objective (FIO) for each associated system. To determine an objective, you should analyze the needs and expectations of users.
For each system you are developing, you must compute a developed software FIO. You do this by subtracting the total of the expected failure intensities of all hardware and acquired software components from the system FIO. You will then use the developed software FIOs, through failure intensity to failure intensity objective (FI/FIO) ratios, to track reliability growth during system test of all the systems you are developing.
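The subtraction and the FI/FIO tracking ratio can be sketched as follows. This is a minimal illustration, not the book's notation; the function names and the numeric example (a system FIO of 100 failures per million calls, with hardware and acquired software expected to contribute 40 in total) are hypothetical.

```python
# Sketch: computing a developed software FIO and the FI/FIO tracking ratio.
# All failure intensities are in the same units (e.g., failures per
# million natural units); the numbers below are illustrative only.

def developed_software_fio(system_fio, component_fis):
    """Subtract the total expected failure intensities of hardware and
    acquired-software components from the system FIO."""
    fio = system_fio - sum(component_fis)
    if fio <= 0:
        raise ValueError("Component failure intensities consume the entire "
                         "system FIO; the objectives must be revised.")
    return fio

def fi_fio_ratio(measured_fi, developed_fio):
    """FI/FIO ratio used to track reliability growth during system test;
    a falling ratio approaching 1 indicates the objective is being met."""
    return measured_fi / developed_fio

# Hypothetical example: 100 - (25 + 15) = 60 failures per million calls
# remain as the developed software FIO.
dev_fio = developed_software_fio(100.0, [25.0, 15.0])  # 60.0
```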
You will also apply the developed software FIOs in choosing the mix of software reliability strategies that meets these objectives, along with the schedule and product cost objectives, at the lowest development cost. These include strategies that are simply selected or not (requirements reviews, design reviews, and code reviews) and strategies that are selected and controlled (amount of system test, amount of fault tolerance). SRE provides guidelines and some quantitative information for determining this mix. However, projects can improve the process by collecting information that is particular to their environment.
Prepare For Test
The Prepare for Test activity uses the operational profiles you have developed to prepare test cases and test procedures for system test. You allocate test cases in accordance with the operational profile. For example, for the Fone Follower base product there were 500 test cases to allocate. The Process fax call operation received seventeen percent of them, or eighty-five.
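A proportional allocation like the Fone Follower one can be sketched as below. The occurrence probability of 0.17 for Process fax call (17% of 500 = 85 test cases) comes from the example above; the other operations and their probabilities are hypothetical stand-ins. The remainder-distribution step guards against rounding losses.

```python
# Sketch: allocating test cases to operations in proportion to the
# operational profile. Only Process fax call's 0.17 share is from the
# Fone Follower example; the rest of the profile is illustrative.

def allocate_test_cases(total_cases, profile):
    """Give each operation a share of test cases equal to its occurrence
    probability, rounding down, then hand any leftover cases to the
    operations with the largest fractional parts."""
    raw = {op: total_cases * p for op, p in profile.items()}
    alloc = {op: int(r) for op, r in raw.items()}
    leftover = total_cases - sum(alloc.values())
    for op in sorted(raw, key=lambda o: raw[o] - alloc[o], reverse=True)[:leftover]:
        alloc[op] += 1
    return alloc

profile = {"Process fax call": 0.17, "Process voice call": 0.58,
           "Enter forwardees": 0.10, "Other": 0.15}
allocation = allocate_test_cases(500, profile)  # Process fax call -> 85
```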
After you assign test cases to operations, you specify the test cases within the operations by selecting from all the possible intraoperation choices with equal probability. The selections are usually among different sets of values of input variables associated with the operations, sets that cause different processing to occur. These sets are called equivalence classes. For example, one of the input variables for the Process fax call operation was the Forwardee (number to which the call was forwarded) and one of the equivalence classes of this input variable was Local calling area. You then select a specific value within the equivalence class so that you define a specific test case.
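The two-step selection (an equivalence class with equal probability, then a representative value inside it) can be sketched as follows. The Forwardee variable and its Local calling area class come from the example above; the second class and all candidate values are hypothetical.

```python
import random

# Sketch: specifying one test case for an operation by picking, for each
# input variable, an equivalence class with equal probability and then a
# specific value within that class. Classes/values are illustrative.

def specify_test_case(input_variables, rng=random.Random(1)):
    """input_variables maps each input variable name to its equivalence
    classes; each class maps a label to candidate values."""
    case = {}
    for var, classes in input_variables.items():
        label = rng.choice(sorted(classes))               # equal probability
        case[var] = (label, rng.choice(classes[label]))   # a specific value
    return case

fax_call_inputs = {
    "Forwardee": {"Local calling area": ["555-0100", "555-0199"],
                  "Long distance": ["1-201-555-0123"]},   # hypothetical class
}
case = specify_test_case(fax_call_inputs)
```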
The test procedure is the controller that invokes test cases during execution. It uses the operational profile to determine the relative frequencies of invocation, based primarily on use but also modified to account for critical operations and for reused operations from previous releases.
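A test procedure's frequency logic can be sketched as a weighted random draw over operations. The profile weights and the idea of boosting critical operations by a multiplicative factor are illustrative assumptions, not a rule stated in the text.

```python
import random

# Sketch: a test procedure drawing operations to invoke with relative
# frequencies from the operational profile, adjusted upward for critical
# operations. Profile values and the boost factor are hypothetical.

def invocation_sequence(profile, critical_boost, n, rng=random.Random(7)):
    """Draw n operations to invoke; critical operations' probabilities
    are multiplied by a boost factor (renormalization is implicit in
    the weighted draw)."""
    weights = {op: p * critical_boost.get(op, 1.0) for op, p in profile.items()}
    ops = sorted(weights)
    return rng.choices(ops, weights=[weights[o] for o in ops], k=n)

profile = {"Process voice call": 0.58, "Process fax call": 0.17, "Other": 0.25}
seq = invocation_sequence(profile, {"Process fax call": 2.0}, 1000)
```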
Execute Test

In the Execute Test activity, you will first allocate system test time among the associated systems and types of test (feature, load, and regression).
SRE follows the usual test practice of invoking feature tests first. Feature tests execute all the new test cases of a release independently of each other, with interactions and effects of the field environment minimized. SRE then follows with load tests, which execute test cases simultaneously, with full interactions and all the effects of the field environment. SRE generally invokes the test cases at random times, choosing operations randomly in accord with the operational profile. And of course you will invoke a regression test after each build involving significant change. A regression test executes some or all feature tests; it is designed to reveal failures caused by faults introduced by program changes.
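A load-test driver that invokes operations at random times can be sketched as below. Exponentially distributed gaps between invocations (a Poisson arrival process) are a common modeling assumption, not something the text mandates, and the rate, duration, and profile are hypothetical.

```python
import random

# Sketch: a load-test driver generating invocation events at random
# times, with exponential inter-arrival gaps and operations drawn in
# accord with the operational profile. All parameters are illustrative.

def load_test_schedule(profile, rate_per_hour, duration_hours,
                       rng=random.Random(3)):
    """Return a list of (time_in_hours, operation) invocation events."""
    ops, weights = zip(*sorted(profile.items()))
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_per_hour)   # next random invocation time
        if t > duration_hours:
            return events
        events.append((t, rng.choices(ops, weights=weights)[0]))

events = load_test_schedule({"Process voice call": 0.7, "Other": 0.3},
                            rate_per_hour=100, duration_hours=1.0)
```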
You identify failures, along with when they occur. The "when" can be with respect to natural units or time. This information