How one tester learned the hard way that quality is in the eye of the pocketbook holder.
The team was preparing to test a maintenance release of a fairly simple client-server application. The application combined multimedia training, online paperwork, and a few tricky aspects, like calendar/appointment logic, progress tracking, and extremely arcane reports. The release contained few functional changes, so we focused on upgrading the operating system, migrating all code to a more up-to-date language, and automating the installation. Our official mandate for testing was to ensure that all the functionality stayed the same as the previous version, that the new installation process went smoothly, and that the new reports were accurate.
I gravitated toward testing the more difficult aspects of the system: the newest and the most complicated pieces. The Scheduler, in particular, had complex logic and a lot of edge cases, while the Reports had a lot of very difficult rules and required careful data setup. The Installer was still under construction during the test process. I determined that these pieces deserved the lion’s share of time and attention.
Testing the large multimedia training portion, which consisted of several hours of short movies, self-paced slide shows, and interactive video recordings, could wait. In our experience, testing these was easy but tedious. We had seen the videos a hundred times, so the less we did with that, I thought, the better.
No one could disagree that the things I proposed to test were the most important. Or so I thought. Secure in my conclusions and oblivious to my biases, I blithely compiled my list of priorities, drew up my tests, and started testing.
Can't See the Forest for the Trees
This priority system worked well at first. We spent a lot of time on the complex aspects of the system, and we found our share of bugs. All the effort we had put into building test materials for the complicated logic of the Scheduler and the Reports paid off. We were testing Reports thoroughly and quickly. We had a matrix of test conditions for the Scheduler that the team posted on the wall and filled in steadily. We were cruising.
After testing everything else for a while, we took a pass through the multimedia portion of the system. We noticed that the graphics and fonts did not look quite the way they had in the old system. I thought, "Well, that's not great, but there are many more important things to worry about." I noted the changes in a bug report, marked it as a "normal severity" level, and forgot about it—at least, until the customer saw the pre-beta demo.
The Font Hits the Fan
The customer sent three of its leaders to view a demo of the product a couple of weeks before it was to go to the field for beta testing. We showed them the Reports, how lovely and precise they were. The client reps smiled. We showed them the complicated Scheduler, how seamlessly it worked to mimic the old system. The client reps beamed. Then, they asked us to show them the slide shows. The client reps stopped smiling. As more slides went up, their faces turned various shades of livid, and they became more and more outraged that the multimedia presentation looked slightly different from the original. I couldn't believe how much anger they felt over the appearance of the fonts in the slide shows. We pointed out that a lot of work had gone into the "invisible" stuff, and it worked fine, but they didn't want to hear it.
I came away from the presentation confused. Clearly the client’s priorities were wrong. They took for granted that the most difficult