Most testers are committed to helping produce better software. That's a good thing. But when a tester takes on the role of "quality police," good intentions can turn ugly. The quality police don't just report the bugs. They appoint themselves judge and jury, ready to dispense justice according to their own convictions of what programmers should be doing. And the project is likely to suffer for it.
Software testers often fall into the role of being the quality police. They enforce quality standards, identify programmers who are not following procedures, and do what they can to punish programmers who they feel are producing inferior work. My view is that the quality police role traps testers into being less effective as testers. And it's more likely to undermine the project by discouraging communication, reducing trust, and causing delays.
A software tester's job is to test software, find bugs, and report them so that they can be fixed. An effective software tester focuses on the software product itself and gathers empirical information regarding what it does and doesn't do. This is a big job all by itself. The challenge is to provide accurate, comprehensive, and timely information, so managers can make informed decisions.
However, many testers take on additional "responsibilities." They harangue programmers for shoddy work or for not following proper procedures. Or they try to mandate how programmers should operate. Or they snipe at the design instead of finding bugs. These testers may refuse to test builds that don't have sufficient documentation or refuse to research bugs that shouldn't have been there in the first place. They think that programmers require discipline and are determined to give it to them. I call these people the quality police.
Some testers adopt the attitudes of the quality police on their own initiative. Others do so at the prompting of their managers or the advice of authors and consultants. Let's look at some of the beliefs that can lead to trouble.
"Testing is quality assurance."
Like most testing groups, yours may be titled "quality assurance." Naturally enough, this may lead you to think that you are responsible for the quality of the software. So you need to do whatever is necessary to ensure that the product is high quality. Don't let the programmers get away with practices that risk introducing bugs. Don't let them cut corners or avoid "best practices."
But this is unreasonable. You can't be responsible for the work of other people you don't manage. At its worst, this sets up a dynamic where the testers are the "quality assurance" group, intent on avoiding bugs regardless of schedule, and the programmers are the "schedule assurance" group, intent on meeting a date regardless of quality. It's a recipe for disaster.
I advise testers to avoid the "quality assurance" label. And I advise managers not to expect their testers to "assure quality." If managers truly want an independent group to audit the work of the programmers, this group should be separate from testing. (It might also audit the testers' work.) Let the testers focus on the product, not the people. And let everyone take responsibility for the quality of their own work.
"Programmers need discipline."
If you're seeing lots of bugs, it's obvious that the programmers could do a better job. They should be unit testing, they should be holding formal code inspections, they should be defining requirements up front, they should be better managed; there's a long list of development practices that they may not be doing, or not doing correctly. It's clear to you that either the programmers don't know the right way to do their job, or they know but are taking irresponsible shortcuts anyway. As the quality advocate, you have to do something about it.
It's easy to think this way. It's particularly easy when you've seen your pet development practice used elsewhere to good effect. Indeed, it's easy to get sanctimonious. Why won't they do things right?
If you are convinced that you know how to