to defend her strategy, the developers eventually won. The next release went out much faster, and everyone congratulated themselves on overthrowing an oppressive regime. The QA manager returned to find that she had been supplanted and repositioned in a toothless role: she owned the process but had no power to enforce it. She left soon afterwards.
Within months, serious defects began to appear in the field. In fact, one defect was so serious that it cost the company contractual penalties and drew senior management's attention. When they asked the obvious question (how could this happen?), no one could answer. The supporting documentation (requirements, test plans, test results) no longer existed; producing it had been deemed too slow and, supposedly, unnecessary anyway.
It seems to me that the safest bet is to measure the number and priority of verified requirements (a rough sketch of what tracking this might look like follows the list below). This has three key benefits:
- The focus shifts to requirements, which moves the effort earlier in the development cycle where it belongs.
- It reveals the inventory of functionality that the system contains. Most managers don't really grasp the sheer scale of what QA is up against in trying to provide comprehensive coverage.
- Requirements can be cut deliberately when the schedule would otherwise slip. The related risks get managed, instead of corners being cut blindly or serious defects being let go.
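To make the metric concrete, here is a minimal sketch in Python, assuming a requirements inventory where each item carries a priority and a verified flag (meaning at least one test has passed against it). The `Requirement` record and its field names are hypothetical, illustrative rather than drawn from any particular tool.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical requirement record; the fields are illustrative.
# "verified" means at least one test has passed against this requirement.
@dataclass
class Requirement:
    req_id: str
    priority: str   # e.g. "critical", "high", "medium", "low"
    verified: bool

def coverage_by_priority(requirements):
    """Return {priority: (verified count, total count)} for the inventory."""
    totals = Counter(r.priority for r in requirements)
    verified = Counter(r.priority for r in requirements if r.verified)
    return {p: (verified[p], totals[p]) for p in totals}

# A tiny example inventory. A real one is usually far larger,
# which is exactly the point of the second bullet above.
reqs = [
    Requirement("REQ-001", "critical", True),
    Requirement("REQ-002", "critical", False),
    Requirement("REQ-003", "high", True),
    Requirement("REQ-004", "low", False),
]

for priority, (done, total) in coverage_by_priority(reqs).items():
    print(f"{priority}: {done}/{total} requirements verified")
```

A report like this also makes the third benefit actionable: if the schedule is at risk, low-priority rows are visible candidates to cut, while any gap in the critical row is an argued-for reason to hold the release.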
Do your team's measures of success actually highlight failure? How does your team strive for success? What works for you?