Test Accreditation - Minimizing Risk and Adding Value


testing facilities are in-house. In the extreme case, when the testing staff are also the development staff, this stage of the process is likely to be very difficult to handle effectively because of the lack of separation between developer and tester.

This stage of the testing process is critical to the remainder of the process. It requires the involvement of testing personnel who understand the purpose of the software, how it has been implemented, and any relevant regulatory requirements (e.g. those of the Australian Taxation Office) or industry requirements. Staff performing this review must understand the limitations of their testing.

The testing facility will often prepare an assessment of the residual software risk associated with the proposed testing. An iterative process may result as the client attempts to find an acceptable balance between risk and testing cost. This process of negotiation and agreement between client and laboratory is critical to the success of any testing, and for many testing projects it can consume a significant proportion of the total resources used on the project. It is crucial for the client that the more critical areas of the software are identified and adequately tested.

The agreed test plan, however it is named and presented, is the basis for the testing and for the development of test cases, and is a major link in achieving satisfactory outcomes. Without proper management and technical input at this stage, reliable test results cannot be achieved.

5.2 Test and calibration methods and method validation (ISO/IEC 17025, clause 5.4)

This is a very significant area which is often handled poorly. Normal practice is to decompose the high-level test plan into a series of tests and test cases. Extreme care needs to be taken to ensure that the tests and test cases chosen do, in fact, meet the principles of capability, validity and repeatability. ISO/IEC 17025 requires validation of test methods. This activity aims to check that the tests and test cases are capable of testing the software, do produce valid results, and are repeatable. The effort required can be in proportion to the risk involved. Some more trivial tests can be validated “by inspection”, whereas complex tests will need to be validated using more detailed techniques. Validation can be done in various ways, including:

  • code inspection,
  • using the proposed method to test some known, or reference, software with well-known performance and inter-comparing the results,
  • running the proposed test on software containing a suitable range of known “seeded” errors,
  • comparing its performance against another proven test technique.

Ideally, validation should be performed by someone other than the person who designed the test.
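The “seeded errors” approach above can be illustrated with a minimal sketch: the proposed test method is run against known-good reference software (checking validity) and against variants containing deliberately seeded errors (checking capability). All function and variable names here are illustrative assumptions, not part of any standard.

```python
# Sketch: validating a proposed test method against seeded errors.
# The "software under test" is a trivial addition routine; the seeded
# variants are faulty versions the test method ought to detect.

def run_proposed_tests(add):
    """The proposed test method: a small set of test cases."""
    cases = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
    return all(add(a, b) == expected for (a, b), expected in cases)

def correct_add(a, b):
    return a + b

# Variants with known seeded errors.
SEEDED_VARIANTS = {
    "off_by_one": lambda a, b: a + b + 1,
    "ignores_second_arg": lambda a, b: a,
}

def validate_test_method():
    # Validity: the method must pass the known-good reference software...
    assert run_proposed_tests(correct_add), "method rejects correct software"
    # ...and capability: it must catch every seeded error.
    missed = [name for name, faulty in SEEDED_VARIANTS.items()
              if run_proposed_tests(faulty)]
    return missed  # an empty list means every seeded error was detected

print(validate_test_method())  # []
```

A real validation exercise would seed errors representative of the risks identified in the test plan, and would retain the outcome as a validation record.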

The use of automated test tools is a critical issue. A commercial test tool is not required to be validated by the test laboratory for use within its intended range of application. The real issue is whether the software under test actually falls within that intended scope of application, i.e. whether the test tool is capable of testing this particular software. Similarly, any test tool developed in-house by the laboratory needs to be validated; that is, its ability to test the software under test and produce valid results repeatedly must be confirmed.
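The scope check described above can be made explicit and recorded. This is a hypothetical sketch, assuming a vendor-stated scope expressed as supported languages and a size limit; the class and field names are illustrative, not from any tool.

```python
# Sketch: checking whether software under test falls within a commercial
# test tool's stated scope of application before relying on its results.
from dataclasses import dataclass

@dataclass
class ToolScope:
    """Intended range of application, as stated by the tool vendor."""
    languages: set
    max_loc: int

@dataclass
class SoftwareUnderTest:
    language: str
    loc: int

def scope_exceptions(tool: ToolScope, sut: SoftwareUnderTest) -> list:
    """Return reasons the tool is NOT applicable; empty means in scope."""
    reasons = []
    if sut.language not in tool.languages:
        reasons.append(f"language {sut.language!r} outside tool scope")
    if sut.loc > tool.max_loc:
        reasons.append(f"{sut.loc} LOC exceeds tool limit of {tool.max_loc}")
    return reasons

tool = ToolScope(languages={"C", "C++"}, max_loc=500_000)
sut = SoftwareUnderTest(language="Ada", loc=120_000)
print(scope_exceptions(tool, sut))  # ["language 'Ada' outside tool scope"]
```

The returned list of exceptions is the kind of evidence a laboratory could retain to show the applicability question was actually asked and answered.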

Test methods and test cases must be documented. They form the instructions to the tester and, if the tester follows the method, they also form part of the record of the test. Records of validation are required to be maintained; they can be an aid to the analysis of suspect results.
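The dual role described above, instructions before execution and record after it, can be captured in a single documented structure. This is a hedged sketch; the field names, identifiers, and the PAYG example are illustrative assumptions only.

```python
# Sketch: a documented test case that serves first as the tester's
# instructions, then (once executed and completed) as the test record.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class TestCase:
    case_id: str
    purpose: str
    steps: list                 # the instructions to the tester
    expected: str
    validation_ref: str         # pointer to the retained validation record
    # Completed at execution time, turning instructions into a record:
    actual: str = ""
    outcome: str = ""           # "pass" or "fail"
    tester: str = ""
    executed_on: Optional[date] = None

tc = TestCase(
    case_id="TC-042",
    purpose="Verify withholding calculation rounds to whole dollars",
    steps=["Enter gross pay of $1,234.56", "Run payroll calculation"],
    expected="Withholding amount is a whole-dollar figure",
    validation_ref="VAL-007",
)
# Execution fills in the record half of the structure.
tc.actual, tc.outcome = "Withheld $262.00", "pass"
tc.tester, tc.executed_on = "J. Smith", date(2003, 5, 1)
print(tc.outcome)  # pass
```

Keeping the validation reference on the test case itself makes it straightforward to trace a suspect result back to the evidence that the method was valid.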

Performing invalid tests is a waste of time. Time lost analyzing flawed results
