Test Accreditation - Minimizing Risk and Adding Value

    • Capability. The test must be capable, ie suited to the software being tested and covering the range of the client’s requirements: it must be appropriate, relevant and applicable. For example, a Phillips screwdriver works well on a Phillips screw, ie it is capable, but cannot be used on a screw with a straight slot, ie it is not capable. This seems obvious, yet test tools are being bought which are not capable of meeting the purchaser’s expectations. All test methods and test tools need to be capable.
    • Validity. The test must be valid, ie the results achieved must reflect reality, eg no false positives or negatives. The test must produce results which are meaningful and correct, and it should not produce indeterminate results. If it does any of these things, the test is not valid and needs to be reviewed and either modified or discarded. Indeterminate or false results create unnecessary effort; an invalid test is a waste of time and effort.
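As one illustration of an invalid test, consider a check built on exact floating-point equality: it reports a failure where none exists (a false positive), while a tolerance-based comparison gives a result that reflects reality. A minimal sketch, with an assumed `total` function standing in for the software under test:

```python
import math

def total(prices):
    """Sum a list of prices in dollars (illustrative software under test)."""
    return sum(prices)

prices = [0.1, 0.2, 0.3]

# Invalid test: exact float comparison reports a failure even though the
# implementation is correct - a false positive caused by the test itself.
invalid_result = total(prices) == 0.6

# Valid test: compare within a tolerance, so the verdict reflects the
# behaviour of the software rather than floating-point rounding noise.
valid_result = math.isclose(total(prices), 0.6)

print(invalid_result, valid_result)
```

The invalid comparison prints `False` on common platforms while the tolerance-based one prints `True`: same software, two different verdicts, and only one of them is meaningful.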
    • Competency. Testing must be competent, ie performed by competent personnel. Testers must know what they are doing. They must understand the purpose of the software being tested. They must understand its operation and implementation. If additional expertise, eg in finance, taxation, security, superannuation or electrical safety, is required, specialists need to be brought on board.

Testers must be able to develop test plans and test cases and understand the limitations of these. They must be able to correctly apply test tools and understand their limitations. This applies regardless of whether in-house or commercial tools are being used. Testers must be able to identify and pursue suspect test results. They must be able to assess the impact of fixes on previous test results. Testers need an enquiring mind which does not readily accept conclusions without supporting evidence.

    • Controllability. Testing must be performed under controlled conditions, ie hardware and software configurations and operating states must be known to the testers and cannot be changed without their knowledge. Anything which has the potential to affect the result must be controlled by the tester. This applies not only to the software under test but also to the hardware, operating system software, application software, test tools, etc, involved in the testing. Unauthorized changes to hardware or software must be excluded. Without control the test result cannot be treated as reliable, and if the conditions under which testing was done are unknown the test results must at least be treated with care. When performing “live” testing, eg on the internet or over a network, it may not be possible to control the load on the system, and this would need to be considered when reviewing test results.
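Two small controllability habits can be sketched in code: recording the conditions a test ran under, and fixing any source of variation (here, randomly generated test data) so it becomes part of the known test conditions rather than an uncontrolled input. The field names in `record_conditions` are illustrative, not a prescribed schema:

```python
import platform
import random
import sys

def record_conditions():
    """Capture the conditions a test ran under so they are known to the
    testers and any change between runs can be detected."""
    return {
        "python": sys.version.split()[0],
        "os": platform.system(),
        "machine": platform.machine(),
    }

# Uncontrolled input: a fresh random sequence every run, so a failure may
# not be attributable to the software under test. Fixing the seed makes
# the test data part of the known, repeatable test conditions.
random.seed(42)
test_data = [random.randint(0, 100) for _ in range(5)]

print(record_conditions())
print(test_data)
```

With the seed fixed, every run on the same configuration generates the same `test_data`, so a changed result points at the software or the environment, not at the inputs.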
    • Chain of evidence. Testing must be documented, ie the actual methods and test cases applied and the test results must be recorded, together with a full record of the hardware and software configurations and the conditions under which the test was performed. There must be a record of the requirements of the software under test, the agreed test plan, what was tested, what tests were performed, how the tests were validated, the hardware and software configurations used, the test results, the criteria used to decide pass/fail conclusions, and the test personnel. Without this it cannot be demonstrated that effective testing was performed, and there is no way anyone can repeat the tests. Also, if there is any legal challenge, such evidence strengthens the tester’s case.
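The record described above can be kept as structured data alongside each test run. A minimal sketch, assuming a hypothetical `run_test` helper and illustrative configuration values (the real record would carry whatever the test plan requires):

```python
import json
from datetime import datetime, timezone

def run_test(name, func, args, expected, config):
    """Execute one test case and return a self-contained record: what was
    tested, under what configuration, the result, and the pass/fail
    criterion used - so the test can be reviewed and repeated later."""
    actual = func(*args)
    return {
        "test": name,
        "config": config,              # hardware/software configuration
        "inputs": list(args),
        "expected": expected,
        "actual": actual,
        "passed": actual == expected,  # pass/fail criterion: exact match
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = run_test(
    name="add-two-ints",
    func=lambda a, b: a + b,           # stand-in for the software under test
    args=(2, 3),
    expected=5,
    config={"app_version": "1.4.2", "os": "Linux"},  # illustrative values
)
print(json.dumps(record, indent=2))
```

Emitting the record as JSON means it can be archived with the build artefacts, giving a durable chain of evidence for each run.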
    • Repeatability and reproducibility. A test is repeatable if the results obtained when a repeat test is performed under identical conditions are consistent with the original results. A test is reproducible if the results obtained when a repeat test is performed by another tester or laboratory are consistent with the original results.
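A repeatability check can itself be automated: rerun the same test several times under identical conditions and flag any inconsistency. A minimal sketch with a hypothetical `is_repeatable` helper:

```python
import itertools

def is_repeatable(test_func, runs=5):
    """Run the same test several times under identical conditions and
    check that every run yields a result consistent with the first."""
    first = test_func()
    return all(test_func() == first for _ in range(runs - 1))

# A deterministic test is repeatable...
print(is_repeatable(lambda: sorted([3, 1, 2])))   # True

# ...while a test whose result depends on uncontrolled state is not.
counter = itertools.count()
print(is_repeatable(lambda: next(counter)))       # False
```

A test that fails this check is "flaky": its verdict cannot be trusted until the uncontrolled condition behind the variation is found and brought under control.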

AgileConnection is a TechWell community.
