using our test cases to test its software (in a rigid and incorrect fashion) and at no point experimenting or trying to break the system. This turned out to be one of the key discoveries of the project. As a result, two further intermediate stages of testing were inserted: first into the supplier's test cycle, and then into user testing (a confidence test immediately after delivery).
Defect Tool and Its Use
As the program moved further into the testing cycles, use of the tool became more widespread and higher profile. Observing who was using the tool, when, and why gave me insight into the program's dynamics and events.
The program manager would ask me for the latest figures and explain why he needed to know. Knowing his reason, I could add extra information or pinpoint specific defects as the situation required.
By asking myself who was interested in the information, I was able to identify and build relationships with key players in the program. It became clear that one of the strand coordinators (who reported to the program manager) was really one of the driving forces of the program. The relationship we fostered was mutually beneficial: the coordinator had readily available information on software quality, and the test team gained an active and vociferous ally.
The statistics that I provided on a daily basis were simple. My daily report gave the number of new defects raised that day, the total number of defects to date, and the number of defects currently with our supplier. No one ever asked me for historical data: how many defects our supplier had fixed since the start of the program, say, or the average time it took to fix a high-priority bug.
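Producing those three daily figures takes very little machinery. Here is a minimal sketch, assuming the tool can export its defect list as a CSV; the column names ("raised", "status", "owner") are hypothetical, not the schema of any particular defect tool:

```python
import csv
from datetime import date

def daily_figures(export_path, today=None):
    """Count new defects raised today, total defects to date,
    and defects currently sitting with the supplier."""
    today = today or date.today()
    new_today = total = with_supplier = 0
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            if date.fromisoformat(row["raised"]) == today:
                new_today += 1
            # "owner" and "status" are assumed export columns.
            if row["owner"] == "supplier" and row["status"] != "fixed":
                with_supplier += 1
    return new_today, total, with_supplier

new, total, supplier = daily_figures("defects.csv")
print(f"New today: {new}, total to date: {total}, with supplier: {supplier}")
```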
It was just as well that no one asked for the historical figures: I couldn't have answered any of those questions anyway. I hadn't set up the defect system to capture that kind of information; once a defect was fixed, I wasn't interested in it anymore. That raised questions in my mind. Should I have planned for the time when those questions would be asked? Or was this program simply not interested in carrying statistics forward into future programs? When I raised this with the program test manager, his view was firmly the latter, and as a matter of pragmatism the issue dropped into the background. At least I knew two things: program management didn't need me producing trends and lessons learned (despite my reservations, I accepted that it was too late to change this), and if I did it again, I'd spend more time up front analyzing the role of the defect management tool and the data it needed to capture, rather than letting it develop over time.
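As an illustration of the kind of question the tool could not answer, here is a sketch of computing the average time to fix a high-priority defect. It assumes each record retains both its "raised" and "fixed" dates (again, hypothetical column names), which is precisely the history that gets lost if closed defects are discarded:

```python
import csv
from datetime import date

def avg_fix_days(export_path, priority="high"):
    """Average number of days from 'raised' to 'fixed' for defects
    of the given priority; None if no such defect has been fixed."""
    durations = []
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            # Requires that closed defects keep their fix date,
            # i.e., that records are retained after fixing.
            if row["priority"] == priority and row.get("fixed"):
                raised = date.fromisoformat(row["raised"])
                fixed = date.fromisoformat(row["fixed"])
                durations.append((fixed - raised).days)
    return sum(durations) / len(durations) if durations else None
```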
The defect management tool and process should be a guide. Statistics and defects are useful, but they are only a window onto the health of the program. As ever, it is awareness of the human element in every part of the program that makes for the full story. Elements to watch include the writing style, the amount of "debate" within the defect descriptions, and how successfully the tool is being used. A successful test manager does not take data and information merely at face value, but uses them to inform his view and to prompt further questions. So when you look at statistics from your defect management tool, know that there is more to understand. The following questions can help you add to what you already know:
- Do I detect emotion within the defect description?