Luisa Consolini tells us why the managerial side of quality is as important as the technical side. The precepts she imparts are: 1) there is something as bad as not doing testing—not managing it; 2) if you don't manage quality, you won't improve it just by applying some fancy quality techniques; and 3) people are not second to quality.
I didn't always see it that way. When I was down in the trenches, spending a fair amount of my twelve-hour workday jotting down wonderfully elegant lines of code, I used to say, "Why aren't they here when I'm working all night long tracking down this damn bug?" How can they really think I can test my programs after they've cut my budget in half and (of course) added a bunch of very important last-minute features critical for product sales? Nonsense! They simply don't know our job. We are managed by people who cannot tell the difference between a PC and a microwave oven (they both have windows, don't they?)!
My "techno-thought" was more or less "us and them." It served very well, in my mind, to explain why we were so fond of trying out all the new, powerful software engineering techniques and they simply were not: managers believe in management, and we technical people believe in subtle, sophisticated, deep technology. It was as simple as that. From my perspective, managers got in the way, and we had to give up what was useful for what was urgent.
Then I was subjected to a harsh reality! One bright morning I was put in charge of process improvement—a buffer between management and engineers—and I simply didn't have a clue as to what could really improve our results, except maybe the application of our deep technology. It was then that I discovered some interesting morals that I would like to share with you:
1. There is something as bad as not doing testing: Not managing it. When should you start testing? What are the testing priorities? What should go into the test plan? Who should do the testing, and how should they get organized? Who should set up a testing environment, and when? How much should be automated? How are you going to discipline the communication between independent testers and developers? How are you going to manage testing, bug fixing, and development simultaneously? How do you decide that you can promote your product to beta testing or to release status? How do you make sure that the bugs you found have really been fixed in the final version? How do you know if your bug-fixing capability is catching up with your bug-finding rate? How do you know if you need more resources to get the product to an acceptable quality level by the shipping date? And if users are involved in testing, how do you make sure that the right resources will be available when you need them for system testing? Have you planned to keep interdepartment system testing and problem fixing well in tune?
Wow! There are so many questions, and those are probably not even all of them. Well, let me tell you that none of the answers is technical in nature, and getting them wrong is as dire as letting an untested product hit the marketplace, and as dire as working in the software shop that did it!
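One of those questions, whether bug fixing is catching up with bug finding, at least yields to simple arithmetic: track the backlog of open bugs week by week. Here is a minimal sketch, using hypothetical weekly counts (not data from any real project):

```python
# Hypothetical weekly counts of bugs reported vs. bugs closed.
# If the open-bug backlog keeps growing, fixing is not keeping pace
# with finding, and you need more resources or a later ship date.
found_per_week = [12, 15, 18, 14, 10]   # new bugs reported each week
fixed_per_week = [8, 9, 11, 13, 14]     # bugs closed each week

backlog = 0
for week, (found, fixed) in enumerate(zip(found_per_week, fixed_per_week), start=1):
    backlog += found - fixed
    trend = "growing" if found > fixed else "shrinking or flat"
    print(f"week {week}: open backlog = {backlog} ({trend})")
```

The numbers are made up, but the point is the manager's, not the programmer's: no amount of clever debugging answers this question; only measuring the trend does.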
2. If you don't manage quality, you won't improve it just by applying some fancy quality techniques. Peer reviews, white box testing, boundary analysis, and so on are fine. But you should never forget that you do that for two basic reasons:
- Satisfying customers, and thus earning an adequate sum of money for your company (and ultimately, I hope, your purse).
- Avoiding the same mistakes over and over again, so that next time you can tell everybody, "I have improved"; this keeps money from leaving your company as "non-quality" costs.
Well, it is not