An Ever-Changing Complex Report
Sometimes we might describe the news from testing as "good." Sometimes we might describe it as "bad." Oftentimes, it's a mixture of both. But one adjective we can almost always apply is "changing."
Test findings during test execution evolve constantly. If I spend two hours preparing slides for a two-hour project status meeting, I know that, by the end of the meeting, my report is stale. On the other hand, if I slap my presentation together quickly, I'll probably walk into the meeting with a less-than-perfect snapshot of test status. There's a trade-off between accuracy and timeliness, between being up-to-the-minute and having my story straight. How to strike that balance depends on the information needs of the audience.
Does the audience worry about the exact count of test cases, the number of passes and fails, and the like? If so, I must work extra hard to make sure all my reports are consistent, but this means that my results are somewhat out-of-date by the time people get them. Does the audience want the latest story, the breaking news, even if all the details aren't clear? If so, I must work to make the results-reporting process fast, albeit at the cost of accuracy from time to time.
To get the balance right, I find I have to involve my managers and other stakeholders in the discussion. Once I explain the time required to achieve particular levels of accuracy and consistency in particular reporting formats, and the trade-offs involved in my being disengaged from the testing process proper while working on such reports, I can usually get the guidance I need to make the right decision.
Perhaps the proper word is "decisions." Each member of the audience may have slightly different expectations. Again, being an effective communicator requires that I target my communication to the listener. Settings in which I report test status to a group require that I aim for a happy medium for all the listeners; however, I can also produce customized information for individual stakeholders. These customized reports can satisfy the listener's specific preferences in balancing timeliness and accuracy.
Even with all the time in the world, and no matter how carefully I prepare myself, I can't know everything. For example, I can't speak from memory in detail about each and every one of three or four hundred active bug reports. Likewise, I can't answer specific questions about an obscure condition buried in one of two or three hundred test cases. When confronted with such questions, I smile and say something like, "Well, you win the game of 'Stump the Dummy!' I don't know the answer to that, but I'll research the issue right away if you'd like."
As long as I am usually well prepared, a forthright admission of ignorance coupled with an unstinting offer to get the information immediately wins out every time over winging it or dodging the question. However, I must be ready to discuss in detail the most serious bugs, the most significant test case failures, and the most dangerous risks. Astute managers often ask me fine-grained questions about situations involving data loss, complete functional incapacitation, nasty performance bottlenecks, or impairment of a crown-jewel feature.
Maintaining a Passion for Quality While Giving a Dispassionate Presentation
As test professionals, we tend to focus on the problems, the risks, and the downside. That's appropriate, as that's often the role of the test team: to help manage risks to system quality. As professionals, it makes sense that we would bring a passion to our work. However, that passion can spill over into Don Quixote-like behavior: fighting nonpriority or unproductive battles. This gets in the way of effective communication. Our goal, then, is to maintain an admirable passion for quality without becoming a project obstacle.
Let's recognize that, just as programmers and other engineers are understandably challenged by bug reports and information about failed test cases, we testers have a natural human tendency to become ego-involved in how the project management team responds to test status. After all, we want our work to be valued. It's important for us to understand that testing does not happen for its own sake. Organizations don't have test groups to satisfy idle intellectual curiosity, but rather to deliver specific quality risk management services to the organization in the context of a specific project. These services help the project management team balance the four elements (quality, features, budget, and schedule) to make a fully informed decision.
Make It Count
The test team counts on you to make the case in support of all the work your team members have invested. Pulling together the right information at the right time for the right audience, and picking your battles according to priority, will make everyone happy. Your test team will be bolstered, your project will run more smoothly, and the software product will achieve higher quality. That will make your customers happy too.
Editor's Note: This column derives from Rex Black's upcoming book Critical Testing Processes (Boston: Addison-Wesley, 2003). Look for this book to be published in summer 2003.