it was found that on the day of closing, the defect had passed through a number of hands. Following up with the parties in question led to the discovery that a wrong assumption had caused this and other defects to be closed incorrectly. The defects were reinstated. On another occasion, we traced a group of defects that had been allocated to the wrong party. Again, studying the history enabled us to rectify the incorrect details and feed the lost defects back into the workflow.
Defect Fields and Formats
Prior to taking a holiday, I prepared a handover note for the defect management process. During the preparation, I realized how complex the defect management system was becoming. The number of different defect statuses should have given this away: there were seventeen. Seventeen had been workable, but the system was becoming unwieldy and difficult for other testers to comprehend. There were also twenty fields for testers to fill in. A common reason we cited for testing taking much longer than expected was that raising and detailing so many defects was an onerous task. However, it didn't occur to me until this point that this was partly of our own making and not just a result of the number of defects.
I resolved to try to reduce this complexity by streamlining the process: eliminating a number of the mandatory fields, reducing the number of possible field values, and removing some of the statuses. This had a small but important effect on the speed of the defect process. Though we accepted that capturing the correct data up front was time consuming, it meant less time was spent dealing with the informal queries coming back from developers.
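As a rough sketch of what that streamlining amounted to in practice, a cut-down defect record can be expressed as a handful of statuses, a short list of mandatory fields, and constrained value lists. The Python below is illustrative only; the status and field names are hypothetical rather than those of our actual tracker.

    from enum import Enum

    # A cut-down set of defect statuses, far fewer than the seventeen
    # we had accumulated. The names are hypothetical.
    class DefectStatus(Enum):
        NEW = "New"
        OPEN = "Open"
        FIXED = "Fixed"
        IN_RETEST = "In Retest"
        CLOSED = "Closed"
        REJECTED = "Rejected"

    # Only the fields a tester must complete when raising a defect;
    # everything else is optional or filled in later in the workflow.
    MANDATORY_FIELDS = ["summary", "steps_to_reproduce",
                        "business_severity", "testing_priority",
                        "functional_area"]

    # A constrained value list keeps data entry quick and reporting consistent.
    SEVERITY_VALUES = ["Critical", "High", "Medium", "Low"]

    def validate(defect: dict) -> list[str]:
        """Return the problems found in a newly raised defect record."""
        problems = [f"missing mandatory field: {name}"
                    for name in MANDATORY_FIELDS if not defect.get(name)]
        if defect.get("business_severity") not in SEVERITY_VALUES:
            problems.append("business_severity must be one of: "
                            + ", ".join(SEVERITY_VALUES))
        return problems

The point of a sketch like this is that every status or mandatory field removed is one fewer decision a tester has to make each time a defect is raised.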
I asked myself the following questions: Was the tool too complicated to use? Would someone else be able to follow the system intuitively, or would it require training or copious documentation?
The answers to these questions told me something about the likely efficiency of the testing effort: it wasn't as efficient as it could have been.
What new fields had been added? Why?
I'd been asked to add a field for retest failures. Was this because someone felt that the software was likely to need many patches? Or someone didn't trust that any given fix would work the first time? Was there a development lifecycle problem that needed to be addressed? Was it a problem with the ability of the developers? In the end it transpired that there was a political agenda, which was useful to know.
Which fields were being used? Did this indicate that the testers were focusing on the right things?
The business severity field was filled in more accurately than the testing priority field, indicating that testers were focused on business impact rather than testing impact (not surprising, since many were secondees from the business). Therefore, their focus was somewhat too narrow for the role they were supposed to be playing. A second interesting example was the functional area field. Some testers always put the same value in this field rather than using it to describe the function in which a defect had been found. Discussions with these testers determined that they did not understand how the area they were testing fit into the overall test effort. Consequently, we spent some time educating these testers.
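Questions like these are easier to answer if field usage can be measured directly. The sketch below is one way to do that from a CSV export of the defect tracker, assuming one row per defect; the file name and the threshold are assumptions for illustration, not a description of the tool we actually used.

    import csv

    def field_fill_rates(path: str) -> dict[str, float]:
        """Fraction of defects with a non-empty value in each field of a CSV export."""
        with open(path, newline="") as f:
            reader = csv.DictReader(f)
            fields = reader.fieldnames or []
            filled = {name: 0 for name in fields}
            total = 0
            for row in reader:
                total += 1
                for name in fields:
                    if (row.get(name) or "").strip():
                        filled[name] += 1
        if total == 0:
            return {}
        return {name: filled[name] / total for name in fields}

    # Flag fields filled in on fewer than half of the defects, for example a
    # rarely used test reference column ("defects_export.csv" is a placeholder).
    for name, rate in sorted(field_fill_rates("defects_export.csv").items(),
                             key=lambda item: item[1]):
        if rate < 0.5:
            print(f"{name}: filled in on {rate:.0%} of defects")

A low fill rate on a field is only a prompt for a conversation, not an answer in itself, as the next example shows.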
We discovered a useful piece of information when we asked why the test reference field was rarely being used. It turned out the majority of the defects were found using an exploratory approach rather than following scripts. This in turn indicated that there was a problem with the quality of the supplier testing. The supplier was