One of the most valuable services a QA group provides is preventing failure. Ironically, if the group succeeds at this, QA might find itself unpopular or out of a job. This week's columnist, Linda Hayes, reveals how typical methods of measuring success can actually cause failure, especially when one group's success comes at another's expense.
You may ask, how can success turn into failure? Consider this...
A post-mortem review of the preventive measures taken during the Y2K scare stirred up a lot of skepticism. Everyone complained about all the time and money wasted on preventing a major failure, because it turned out to be no big deal. Excuse me? Doesn't the fact that it wasn't a disaster mean all that effort was well spent?
So, how can QA measure its value without making enemies or being penalized?
As reasonable as it may appear, measuring QA by the number of defects found does not work. For starters, it places QA in an adversarial role with development, because every win for QA is a loss for development. I worked on a project that awarded bonus points to development for delivering the fewest defects to QA, and to QA for finding the greatest number before release to production. Makes sense, right?
Wrong. This approach created a bizarre situation in which development and QA made careers out of parsing exactly what a defect was. Was it an undocumented feature? A missing requirement? User misbehavior? The debates were endless. Developers accused QA of deliberately testing absurd scenarios and performing improper actions just to cause errors, while QA accused developers of denying what were clearly failures or missing functionality.
Even more insidious was the black market that developed: developers would literally bargain with QA to track defects off the books--"just between you and me." This created discord within QA when one team member, who was planning a transfer into development and wanted to curry favor, kept a side spreadsheet of unreported defects. When the rest of the QA team found out, they were incensed because it cost them bonus points.
The developers argued that rewarding QA based on how many defects were found motivated QA to spend more time testing dark, unlikely corners than mainstream paths that were likely to work. The problem is that users spend most of their time in those common areas, so failing to test them thoroughly invites higher-risk failures than revealing errors under extreme and unusual conditions does.
The worst outcome is that the defect-hunting mindset diverts QA from its true role: quality assurance. Finding defects through testing is quality control, not assurance. Think about it--if you are paid to find problems, what motivation do you have to prevent them? You are essentially penalized for investing in the practices--requirements, reviews, inspections, walkthroughs, test plans, and so on--that are designed to nip issues in the bud.
So the logical way to reward QA, it would seem, is to measure defects that escape into production. The fewer the better, right?
Not necessarily. I know of another company that compensated the QA manager in this manner. In turn, she methodically and carefully constructed a software development life cycle aimed at producing the highest-quality product possible. Development chafed under what they perceived as onerous formalities and time-consuming processes. Product management also complained about the lengthy test cycles. But she prevailed because the proof was there: product quality improved significantly, with virtually no high-priority defects reported in the field.
The manager took six months of maternity leave, and in her absence, developers began to make the case that the entire development process was too burdensome and that testing took way too long. They pointed out that the software was stable, so the elaborate QA edifice was excessive. The same tests had been run for years and always passed--who needed them?
Without the QA manager around