"We have world-class developers, but our testers are second class." I hate hearing that. Too often, it’s true—and it's not the testers' fault.
How did things get this way?
I've been in the software industry for over twenty-five years. I can remember when we didn't even have testers. If we were lucky, we had systems engineers or requirements analysts, but for the most part, projects had developers and project managers.
That worked—until the systems we developed became too complex for project teams to verify by themselves. We realized we needed people who weren't caught up in how the system was developed, people with independent thoughts—testers.
Info to Go
Testing a complex system is just as difficult as creating a complex system. Just creating unit tests that test each path or object individually is not sufficient testing for a complex system. Sometimes we also need product experts to test the system. Sometimes we need people who understand the design of the system, even if they don't have any coding background. Sometimes we need fabulous exploratory testers, people who want to see how they can break the software. And sometimes we need testers who can develop maintainable automated tests—the larger the system, the more likely the system will hang around for more years than anyone can believe, and well-designed, well-developed automated regression tests that don't require much maintenance can save you money.
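To make the "maintainable automated regression tests" point concrete, here is a minimal sketch in Python's standard unittest framework. The function under test, `apply_discount`, is invented for illustration; the idea is that a table-driven test like this stays cheap to maintain, because adding a regression case is a one-line change rather than a new test method.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountRegressionTests(unittest.TestCase):
    # Table-driven regression cases: each new defect found in the field
    # becomes one more row here, not a new hand-written test method.
    CASES = [
        (100.00, 0, 100.00),
        (100.00, 25, 75.00),
        (80.00, 10, 72.00),
        (100.00, 100, 0.00),
    ]

    def test_known_cases(self):
        for price, percent, expected in self.CASES:
            with self.subTest(price=price, percent=percent):
                self.assertEqual(apply_discount(price, percent), expected)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)
```

Run it with `python -m unittest`; the `subTest` context makes every failing row report independently, so one bad case doesn't hide the others.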
Today, even moderately sized systems are technically more complex than the systems we developed ten years ago. The more complex the system, the more difficult it is to develop—and to test. It stands to reason, then, that the people we hire for development and testing need to understand these complex systems.
Different types of developers have different skills. Expert GUI developers have different technical skills than expert network developers—who have different skills than expert architects or expert database designers. We may lump them all together as "developers," but most organizations separate the developers by function. However, only the most sophisticated organizations associate specific, appropriately skilled testers with each development function.
If you don't associate testers with each development function, or you think that testers exist only to find defects, you’re not receiving the full potential value of your testers. And the new testers you hire won't be able to keep up with the developers. You'll have a second-class test group.
The problem with second-class testers
So, what's the problem with second-class testers? Isn't that just the way of the world? After all, what's testing except a place to put developers who can't quite cut it, not-so-technical people who can't make it in marketing, or other people who don't quite fit anywhere else but whom we're sure we need?
Brpht. Wrong-o. Testers fulfill an important and critical role in product development:
Testers provide information about the product under development and test.
Testers have at least three unique sets of customers: the developers, the product's users, and the people who make release decisions. To the developers, testers provide information about where they were confused, or how easy it was to test a particular product area, or what doesn't work about the product—feedback about the developers’ efforts. To the product’s users, they supply information via a support group that understands the problems in the product. To management, they provide information relevant to assessing the risk of release. To serve all of the testers' customers, you must have first-class testers.
First-class testers are sufficiently creative to assess the design and architecture of the system before the code is written. While the code is under construction, first-class testers design and implement their testing harnesses, both automated and manual, creating tests that stress the system in ways the developers do not expect. First-class testers can measure what they’ve tested, assess the risk of what they’ve tested, and know if they've tested enough of the system to expose the risks of product release.
So how do you create a first-class test group? We don’t live in a perfect world, but one way to improve your test group's skill is to imagine what a test group would look like in a perfect world, and then to hire or train against those requirements.
What would first-class testers do?
In this perfect world, what would your testers do? And don’t tell me test! Testers don't just test the product; testers can fulfill numerous other roles on the project. I've used testers as project coordinators, as technical review readers and moderators, as design team members, and as testware developers.
When I staff a test group, I hire a majority of people who can acquire solution-space expertise—deep knowledge about the architecture and design of the product—and who can use a variety of test techniques, including automation. Developers can't adequately test their own code. As Robert Glass writes in Facts and Fallacies of Software Engineering, even when developers think they have "fully" tested their code, only 55%–60% of the logic has been tested. Plus, Capers Jones asserts in his Assessment and Control of Software Risks that systematic black box testing tests less than 30% of the code. Even if you don't believe these statistics, we can all agree that even with "full" developer and black box testing, there's a ton of code that we don't know anything about—and we know nothing about the code the developers forgot to insert (missing logic).
Think about your product. What sorts of testing does your product require? If you're not familiar with the internal design of your product, ask the developers. I won't hire testers who don't already know about or have the ability to learn about these kinds of testing: boundary condition testing, equivalence partitioning, combinatorial testing, exploratory testing, and testing the product from start to end, not just testing requirement-by-requirement. I can train people on test techniques if they have the ability to understand the product design or to look into the code and read it. I can't train people on the kinds of test techniques I want them to perform if they don't have those abilities.

In his Software Development article "Recruiting Software Testers," Cem Kaner suggests that product experts are a valid component of your test group. For many complex products, he's correct. Unfortunately, I've met many test teams composed of only expert product users (which is not Kaner’s intent)—and those teams are second class. Expert users who test in conjunction with more technical testers can be extremely effective. Expert users alone, or even worse, manual black box testers who don’t know how the system is developed or how the users use the system, are insufficient to test the product.
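Two of those technique names can be illustrated with a small sketch. The validation rule below is invented for this example; the point is how equivalence partitioning (one representative value per class of inputs) and boundary condition testing (probing each edge and its neighbors) divide the work.

```python
def classify_order_quantity(qty):
    """Hypothetical rule, invented for illustration: orders of 1-99 are
    'standard', 100-999 are 'bulk'; anything else is rejected."""
    if not isinstance(qty, int) or qty < 1 or qty > 999:
        return "rejected"
    return "standard" if qty < 100 else "bulk"

# Equivalence partitioning: one representative value per input class.
partitions = {
    "rejected": [-5, 1000],   # below and above the valid range
    "standard": [50],
    "bulk": [500],
}
for expected, samples in partitions.items():
    for qty in samples:
        assert classify_order_quantity(qty) == expected

# Boundary condition testing: each edge of each class, plus its neighbors.
boundaries = [(0, "rejected"), (1, "standard"), (99, "standard"),
              (100, "bulk"), (999, "bulk"), (1000, "rejected")]
for qty, expected in boundaries:
    assert classify_order_quantity(qty) == expected
```

A tester who can read the code knows where the real boundaries are; a tester who can't is guessing at them from the requirements.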
On an insurance system I worked on a few years ago, the testers were testing completely from the GUI, even though it would have been faster, easier, and cheaper to check the answers if they'd tested primarily from the product's API. The developers hadn't created a formal API because the testers didn't know enough about testing to ask for one. The testers didn't understand the architecture of the product, and they couldn't read or write code. The testers had no internal system expertise, although they did understand most of the ins and outs of the insurance system. Unfortunately, the testers were not expert users either—the field people regularly called in explaining why something that had worked in the previous release was now broken. The testers' inability to create tests quickly enough to provide feedback to the developers was part of the reason that the group's projects were late and that the customers found many defects.
You could argue that the developers should have created the API because it's a reasonable development practice, and I would agree with you. But if the organization had hired appropriately technical testers, they would have insisted on the API early in the product's lifetime, and the developers would have been happy to define an API for faster testing feedback.
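The GUI-versus-API point is worth a sketch. Suppose, hypothetically, the insurance product had exposed its premium calculation as a callable function (the name, rates, and rules below are my invention, not the actual product's). A tester who could work at that level could check hundreds of quotes in seconds, instead of keying each one through the screens.

```python
# Hypothetical API-level test. quote_premium is a toy stand-in for the
# product's real API; the rates and rules are invented for illustration.
def quote_premium(age, coverage, smoker):
    base = coverage * 0.001
    if age >= 60:
        base *= 1.5
    if smoker:
        base *= 2
    return round(base, 2)

# A table of expected quotes acts as a fast regression suite--no GUI needed.
expected_quotes = [
    # (age, coverage, smoker) -> expected premium
    ((30, 100_000, False), 100.00),
    ((30, 100_000, True), 200.00),
    ((60, 100_000, False), 150.00),
    ((65, 250_000, True), 750.00),
]
for (age, coverage, smoker), expected in expected_quotes:
    actual = quote_premium(age, coverage, smoker)
    assert actual == expected, (age, coverage, smoker, actual)
```

The GUI still needs testing, but only for GUI behavior; the business rules get verified where they live, quickly and repeatably.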
Expert users weren't necessary to test this insurance system. In fact, having smart people who thought they were expert users actually slowed the testing down since they didn't understand anything about the product aside from a superficial knowledge of insurance. Without testing expertise, the testing took longer and found fewer problems.
Does that mean you don't need expert user testers at all? No. In fact, a colleague of mine who tests ultrasound equipment still needs radiologists to test changes to the software. The radiologists verify that the software's interpretation of the images is correct. That doesn’t mean radiologists are his only testers; he uses a variety of other testers. However, he needs the radiologists to gather enough information about the image interpretation so that the company can manage the risk of not knowing if the changed image interpretation is correct. The radiologists supply useful information in a limited area. The other testers provide more general systems information.
Contrast the insurance system with the ultrasound system. Because image interpretation is part of the product, expert users are needed for sufficient testing. The system is too complex to enumerate and run every test case; testing it "completely" would require an infinite number of tests. No one can completely test the ultrasound system, but expert users can help reduce the risk of releasing with defects obvious to expert users.
What would first-class testers know?
In a perfect world, what knowledge would your testers have? Outwardly, the insurance system testers and ultrasound testers look as if they're performing the same work: planning the testing, writing the tests, and running the tests. However, the knowledge they require and use is different.
I assess four areas of technical expertise when evaluating technical knowledge. First, I look at functional knowledge. Functional knowledge is how well the person knows the technical part of his job. For example, how well does the tester know how to test (which test techniques), or how well does the developer know how to develop software (such as design and debugging techniques)?
Next I look at product domain expertise: how well the person applies his functional knowledge to the product, how well he has learned other products in the past, and if the person can explain how he chose which techniques to use when, depending on the product. I also look for ideas about how quickly the person can learn about the product (solution-space).
The third area is tool/technology experience: how well does the person understand the tools used in or available for our environment? People who already have tool or technology experience can be more productive more quickly than people without it. Of the four areas, this area is the easiest to teach someone, assuming the person has the ability to learn the tools or technology. While this is important to discover, I have not found tool or technology experience to be a useful discriminating factor when assessing what my staff knows.
Finally there is industry expertise: how well the person understands what the customers of this particular industry expect, and how well he can articulate that knowledge and apply it to his functional work. For instance, does the person understand the problems of the customer (problem-space)?
The larger or more complex products require testers with more knowledge of different test techniques and testers who can learn how the product works so that they can apply those techniques. Depending on your industry, you may need testers who understand the industry. And, depending on how much you're willing to automate the testing, the testers may need to understand your internal tools and any test tools you use.
What’s missing on your team?
Once you've determined what your ideal candidate looks like, look at your current test team. Do you need to fill in any gaps? If so, how do you judge abilities in an interview setting? Aside from behavior-description interviewing (questions that help you understand how a candidate has performed in previous jobs), if you want testers who can develop testware, give them the same audition that you give developers. (If you're not using auditions in your interviews, you're missing out on a tremendous technique for assessing a candidate's adaptability to your environment.)
If you aren't looking for people who can write code, but who can read code and then apply different testing techniques, create an audition using your code, and ask them to discuss the testing techniques they would choose and how to apply them to the code.
Or you may be looking for people who have significant test technique knowledge, people who can understand the product's architecture and design and who can choose and apply appropriate test techniques. You can ask those candidates to sit in on or lead a design discussion, and then ask how they would test the product, based on the design.
If you're interested in risk assessment, ask the tester to discuss what some of her concerns were on a past project and what she did about them. Show the tester some of your problems and ask how she would explain the risk of release to a developer or project manager.
Not all the testers will be appropriate for all testing jobs. Some of them will know more about the product internals, some will know more about the industry. You will more than likely have to decide which skills you need the most and make some tradeoffs.
First-class testers do more than find and report defects; they supply information about the product to the entire organization. Sometimes that information includes test results, defect reports, or data about the system's performance. Sometimes the information is feedback about requirements or design. The more information your testers provide to the developers, the requirements people, the writers, and anyone else involved in product development, the more valuable they are. Properly done, testing will reduce your cost to market, the risks of releasing with outrageous defects, and the cost of ongoing maintenance.
If you're already managing a test group, think about where your testers are successful and where they are not successful. Supplement the staff you have with other talent when it's time for you to hire, and make sure you have auditions or other interviewing techniques to hire the kinds of testers you need.
Testers don't have to be second class in your organization. Hire and train your staff appropriately so that they can fulfill your organization's needs.
The Tester/Developer Relationship
Aside from providing information about the product under test, testers interact with the developers, altering the way the developers create the product.
You might think that because developers want to be proud of their products, they would look for problems early and fix them early. Many of the developers I've met do. However, developers are not testers. Developers usually can't see their own defects, so they can't detect all the problems in their work products. And the bigger and more complex the system, the less likely the developers are to see their defects.
Are Your Testers Second Class?
Are your testers routinely excluded from requirements or design meetings?
Do your testers have to resort to eavesdropping to hear information about the product?
Are your testers' requests for tools postponed or ignored?
Are product testability requirements postponed or ignored?
Are the testers' per-person training budgets significantly less than the developers' budgets?
Are all your testers interchangeable; i.e., are they equally equipped to work on all of your products?
Do your testers work with developers on the code only after the product is built, either because they're not brought into the project early enough to work with requirements and design, or because they don't know enough about requirements and design to supply feedback?
If you answered yes to even half of these questions, your testers are second class. They are excluded from key discussions and prevented from obtaining the tools and product expertise they need to do their jobs. Oh, you don't do it intentionally. Usually, the testers don't know enough about the technical side of testing, they lack the knowledge or skills to test the product adequately, and management is afraid to "waste" money hiring people with other expertise because the managers can't perceive an adequate return on their investment.