Three years ago, I embarked on a career as a software tester with a company involved in control engineering. After just one year of testing, I was promoted to leader of the test group. I was reminded of that experience as I read Pam Hardy's article, "Perspectives from a New Software Tester," in the March/April 2000 issue of STQE-not only was I new to testing, but I suddenly found myself responsible for managing other testers.
I had some background experience to fall back on, though. Prior to becoming a full-time tester, I had worked for a company in which I wore several generalist hats-developing, testing, and installing control systems, as well as providing technical support to the systems I installed. Armed with this experience, and having seen how customers used the software, I became a thorough and sadistic tester at my new company-pushing the software's functional limits by applying what I'd seen users do out in the real world. After a year of hard work in testing, I was put in charge of the test group.
As the new test manager I faced several challenges. I was fortunate that much of the testing process was already in place; but since test reporting was done manually, I had to spend much of my time making certain that testing was actually completed. Working side by side with the testers, I quickly realized we had a problem-each tester had his or her own definitions of what severity to apply to a defect, and how to verify a defect.
This disparity of definitions made documentation-and answering questions-difficult. Project leaders, for example, were always asking how much time the staff would need to test out this or that feature, so they could calculate a budget for test time. And there were always questions as to why a certain area was slated for testing. Then, as testing began, project leaders would regularly ask how far along testing was, or when I expected it to be completed.
The overriding challenge, however, was the us-versus-them tension between the developers and the testers-a tension that widened the gap between Development and my test group.
It became obvious to me that there were four keys to getting better organized:

- establishing a common set of ground rules
- tracking and reporting how the testing is going
- determining what needs to be tested
- maintaining good communication

Let's talk about each of these components in turn.
Common Set of Ground Rules
In my days as a tester, I had implemented my own processes to help me better organize my testing; I now realized that all of the testers in my group also had mechanisms of their own.
Some testers would copy the test cases directly to the test logs, while others would summarize them-which made it hard to tell if that exact test case had been tested. When defects were found, the Defect ID from the defect tracking system would not always get entered into the test logs-so although it appeared that test logs were passing, defects would still be found in those areas.
I knew I had to come up with a technique, but I also knew it had to adhere to the KISS principle: "keep it simple, stupid." We did not need another overly bureaucratic process to slow us down, since there was a belief in the organization that Testing was the bottleneck of product deliverables. There was so much to test, and not enough testers or enough time.
What would work? My test group seemed to be open to anything as long as it did not take too much time and did not impede the testing activity. And I knew that without buy-in from my group of testers, the new ideas I implemented would be doomed to failure. So I asked my test group to come up with ideas.
All the ideas were consolidated into a "Tester Users Guide," and shared with the project leaders. The guide clarified the testing process so that when we tested-in spite of our different personal styles and habits-we had a common set of ground rules to document our testing. The paragraphs that follow describe what the guide covers.
We still find this guide useful in our organization, as a way to decrease the ambiguities involved in our testing process. Since it contains answers to all the typical questions asked by new testers, it's a good training guide. It walks a tester through our test group's process-from getting assigned the task and entering it in their time sheets, to locating the test log on the network and beginning testing. It also helps standardize what to do when they find a defect, and what to do to verify it.
Our guide stresses the use of test logs to document what tests have been completed-formalizing testing through the completion of predefined test logs with test cases. If the test case passes, it's marked as such. If a defect is found, it's recorded in a defect tracking system.
The process we use for entering defects is a closed loop/corrective action operation-meaning the defect entered can be closed only by the tester who submitted it, and only after it has been verified. A tester submits a defect and rates its severity as Showstopper, No Workaround, Workaround, Spelling/Format, or Suggestion-all severity categories defined in our Users Guide. The defect coordinator then prioritizes the defect, based on the tester's description, into one of three priority classes.
The prioritized defect is assigned to a developer to fix. The developer fixes the defect, and returns it to the tester to verify. In this model, the tester is ultimately responsible for verifying that the defect has been fixed.
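The closed-loop rule above-only the submitting tester can close a defect, and only after the fix has been verified-can be sketched as a small state machine. This is an illustrative model, not our actual tracking system; the class, method, and state names are invented:

```python
# Minimal sketch of a closed-loop defect lifecycle. All names are
# illustrative; the real defect tracking system works differently.

class Defect:
    def __init__(self, defect_id, submitter, severity):
        self.defect_id = defect_id
        self.submitter = submitter
        self.severity = severity      # e.g. "Showstopper", "Workaround"
        self.state = "Submitted"

    def assign(self, developer):
        self.developer = developer
        self.state = "Assigned"

    def mark_fixed(self):
        # Developer returns the defect to the tester for verification
        self.state = "Fixed"

    def close(self, verifier):
        # Only the original submitter may close, and only after a fix
        if verifier != self.submitter:
            raise PermissionError("Only the submitting tester can close a defect")
        if self.state != "Fixed":
            raise ValueError("Defect must be fixed before it can be closed")
        self.state = "Closed"

d = Defect("D-101", "alice", "No Workaround")
d.assign("bob")
d.mark_fixed()
d.close("alice")
print(d.state)  # Closed
```

The point of the guard clauses is the "closed loop": no defect leaves the system without passing back through the tester who reported it.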
The next problem the guide addresses is how the testers deal with the different defect resolutions. There are various resolutions such as Fixed, Not a Defect, Not Repeatable, Not Implemented, Do Not Fix, and Transferred. Steps are outlined for what the testers should do in the event of each of these resolutions-ensuring that all our testers follow the same steps for all the defects resolved.
Our guide also suggests the clearest ways in which to describe the defect, and reminds our testers to include any associated files, pictures, or steps that might help the developer reproduce the problem and fix it. We follow the same bug report scenarios as Rex Black recommends for grounding test status (see "Effective Test Status Reporting" in the March/April 2000 issue of STQE), except that we don't require getting others to review the defect before submitting it.
How Is Your Testing Going?
"How's it going?" is a question test managers field several times a day, and you can choose to answer it as vaguely as you want.
But what do you say when Management's questions are more specific? "What percent of your testing is completed, and how much is failing?" Or "When do you expect to be finished with testing?" If Management is asking these questions, your answers had better be phrased in the language they understand: quantifiable numbers.
In order to accomplish this, I first quantify the testing by the number of test cases provided prior to the start of testing (as testers add test cases during the process, those cases are also tracked). I measure this day by day and over time. Testers are instructed to update their test logs daily, so we can tally the number of test cases passed, failed, and waiting for test. This is all summarized in what I call a consolidated log of the entire project, which compiles information from each tester and summarizes the progress reported by each test log.
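The daily tally behind the consolidated log can be sketched in a few lines. This is a simplified model, assuming each test log is just a list of case statuses; the function name and data shapes are illustrative:

```python
# Hypothetical sketch: consolidating per-tester test logs into one
# summary of passed / failed / waiting counts.
from collections import Counter

def consolidate(test_logs):
    """test_logs: {tester: [status, ...]}, status in 'pass'/'fail'/'waiting'."""
    totals = Counter()
    for cases in test_logs.values():
        totals.update(cases)
    executed = totals["pass"] + totals["fail"]
    total = executed + totals["waiting"]
    return {
        "passed": totals["pass"],
        "failed": totals["fail"],
        "waiting": totals["waiting"],
        "percent_complete": round(100 * executed / total, 1) if total else 0.0,
    }

logs = {
    "alice": ["pass", "pass", "fail", "waiting"],
    "bob":   ["pass", "waiting", "waiting"],
}
print(consolidate(logs))
# {'passed': 3, 'failed': 1, 'waiting': 3, 'percent_complete': 57.1}
```

Run daily, a summary like this gives the day-by-day numbers the trend chart is built from.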
From the consolidation of these logs, and the associated forecasts, I can give an answer to "How is your testing going?"-an answer that's sometimes surprising. From the information shown in Figure 1, for example, it appears that we're executing our test plan ahead of schedule; but our total passing seems to be falling so far behind that it will be completed nearly a week past the execution end date.
In order to achieve these tracking results, we had to modify two parts of the process: how we implemented the test logs and how we executed testing.
In the past, our testing documentation process used simple Microsoft Excel worksheets to record the test case pass or failure (Figure 2). The test cases were copied from the test plan into the Excel sheet's Test Objective column. If the test case passed, the tester put "pass" in the Test Result column. If the test case failed, a defect was entered into the defect tracking system-and assigned a strategy number (S#) that related to our test cases and requirements, a tracking number (Defect #), and a Severity. Once the defect was resolved and verified by the tester, a "Y" for YES was entered in the Verified column.
This relatively simple technology meant extra work for me! I had to manually go through all these test logs-verifying first that all the test cases were executed, and then that all defects found were resolved and verified both in the defect tracking database and in the test log. A typical testing phase would contain at least twenty to twenty-five test logs, with as many as a hundred test cases inside each log-meaning late nights for the test manager.
However good our intentions for these logs were, they were hard to maintain because they weren't used consistently. And it was hard to tell what was done, since each tester had their own way of documenting their results.
Even when used religiously, the Excel worksheet had several drawbacks. It didn't lend itself well to documenting those "above and beyond" test cases-the cases outside standard test cases in which the important defects are usually found.
If my goal was to be able to easily convey how our team's testing was going, these unmanageable test logs were clearly not the best way to get there.
I decided to program our test logs using Visual Basic for Applications (VBA). This new format (Figure 3) allowed the VBA code to verify each entry, color-code each row based on the defect's severity, and validate the entries for fixed defects. Defects with a severity of Showstopper or No Workaround would be coded an urgent red color. Workarounds were yellow, Spelling/Format were blue, and Suggestions were an even less-urgent green. Testers or test coordinators reviewing the test logs could now easily distinguish the severity of the defects at a glance.
Once the defect is fixed, the tester is instructed to verify the defect and enter a "Y" for YES in the Verified column. Completing this triggers the row color to revert to the default color. The tester also enters the number of the software release build in which the defect was verified.
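The color-coding rules boil down to a simple lookup. The severity names and colors come from the description above; the default row color is an assumption, and the sketch is in Python rather than the VBA we actually used:

```python
# Illustrative restatement of the test log's color-coding rules.
# Severities and colors are as described in the text; "white" as the
# default (post-verification) row color is an assumption.
SEVERITY_COLORS = {
    "Showstopper":     "red",     # urgent
    "No Workaround":   "red",
    "Workaround":      "yellow",
    "Spelling/Format": "blue",
    "Suggestion":      "green",   # least urgent
}
DEFAULT_COLOR = "white"

def row_color(severity, verified):
    # Once a defect is verified ("Y"), the row reverts to the default color
    if verified == "Y":
        return DEFAULT_COLOR
    return SEVERITY_COLORS[severity]

print(row_color("Showstopper", verified=""))   # red
print(row_color("Showstopper", verified="Y"))  # white
```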
Of course, some defects do not get fixed:

- MP: moved to another project
- LP: lowered in priority
- DNF: do not fix
Each tester is, as I mentioned earlier, responsible for testing "above and beyond" the standard test cases presented to them. These "tester input" test cases are entered in a special section of the test log called "Other Things Tried" (Figure 4). After evaluation by the defect coordinator, the three defects in this example were tagged as moved to another project (MP), lowered in priority (LP), or not to be fixed (DNF). Defects assigned to these categories remain open in the defect tracking system, but are passed on to the next project-and the history of the defects is recorded in both the test log and the tracking system.
When this log is saved and closed, the test log validates its entries, and updates statistics such as the number of test cases added/passed/failed/verified. And the information is long-term; when testing is complete, and each defect fixed and verified, the history of all the defects remains in our records, stored on the network's project directory.
This new reporting format also improved the functionality of the consolidated logs mentioned earlier (Figure 5). By organizing test areas from each test log as tabs or worksheets in Excel, we can now move quickly between the copies of individual logs-able to see at a glance which test logs were updated, how many test cases were added/passed/failed, and all of our defects' severities.
Color coding helps us here, just as in the individual logs; any failed test cases are marked in red, as seen in area 2 and area 3 of Figure 5. A test log that is completely tested and passed would show up as green. Test logs with no severe defects are indicated by the Pass/Conditional (P/C) condition or yellow color, as seen in area 1 of Figure 5. When your testing is completed, all the rows should show green, indicating that all of the test cases inside the test log have been executed and passed.
Below the test log listings in the consolidated sheet is an inset box summarizing all the testing: percent complete, percent passed, and percent failed. We pay special attention to the number of test cases added during a test phase, indicating these "above and beyond" tests in the Added Strategies column. (In Figure 5's test example there were 48 original test cases, with 11 "above and beyond" test cases, for a total of 59.)
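The arithmetic behind the inset box is straightforward. Here is a sketch using the counts quoted above (48 original plus 11 added cases); the pass/fail counts are hypothetical:

```python
# Sketch of the consolidated summary box. The 48 original and 11 added
# test cases come from the Figure 5 example; passed/failed are made up.
def summarize(original, added, passed, failed):
    total = original + added
    executed = passed + failed
    return {
        "total_cases": total,
        "percent_complete": round(100 * executed / total, 1),
        "percent_passed": round(100 * passed / total, 1),
        "percent_failed": round(100 * failed / total, 1),
    }

print(summarize(original=48, added=11, passed=50, failed=5))
# {'total_cases': 59, 'percent_complete': 93.2,
#  'percent_passed': 84.7, 'percent_failed': 8.5}
```

Keeping added strategies in the denominator is the design choice that matters: "percent complete" should not improve simply because a tester stopped writing new test cases.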
This consolidation process is performed automatically, and the results are emailed to me. It's now my job as test manager to review this summary and provide feedback to my testers and peers. With this information, we predict completion dates using the trend function in Excel to produce a chart summarizing the progress-which we publish on our intranet. The final step of the consolidation process is to query the defects found in the test logs against the defect tracking system. The defect tracking system returns information about the defects, labeling each one as fixed, lowered priority, or moved to another project.
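Excel's trend function fits a straight line to the progress data and extrapolates it; the same forecast can be sketched directly with an ordinary least-squares fit. The progress numbers below are hypothetical:

```python
# Rough sketch of forecasting a completion day the way Excel's TREND
# function does: fit a line to cumulative passed counts, then solve for
# the day on which the line reaches the total case count.
def forecast_completion_day(days, passed, total_cases):
    n = len(days)
    mean_x = sum(days) / n
    mean_y = sum(passed) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(days, passed))
             / sum((x - mean_x) ** 2 for x in days))
    intercept = mean_y - slope * mean_x
    return (total_cases - intercept) / slope

# Hypothetical data: five days of progress toward 59 total cases
days = [1, 2, 3, 4, 5]
passed = [8, 15, 24, 31, 40]
print(round(forecast_completion_day(days, passed, 59), 1))  # 7.4
```

A forecast like this is what exposes the gap described earlier-executing ahead of schedule while the passing trend lands a week past the execution end date.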
How's our testing going? This method makes managing the testing activities with test logs easier, and helps me answer that question from Management with more confidence.
What Do I Need to Test?
Software products continue to evolve, driven by customer requests and market pressures; new features, new functionality, and new operating environments mean your testing plans have to keep up with thousands of changes. What areas do I need to test? How much time do I have to properly test the product to assure quality? Will Development deliver the software in time so that I have ample time to test it?
I've found that acknowledging your resource limits is one of the most difficult parts of being a test leader. Sure, you want to be able to test it all, and feel as if you're releasing a bug-free piece of software. But there is a timeline to follow and products that must be shipped. In the face of that reality, we do risk analysis to determine what areas we can safely say are okay not to test.
Too little time to test newly added features-that's often the overriding factor in making tough test decisions. I ran into a sticky situation a few months after I took over as a leader: I had to meet a commitment to a customer to get a product tested and ready for use in a painfully short amount of time. Had we just applied past techniques for testing everything, I would never have made it through testing. As test leader, I had to weigh several factors. The product was a mature application, and had been thoroughly tested in prior releases. This release was adding only a few new features. I had a customer commitment to meet. The best option seemed to be to do a risk analysis on all the areas, and do thorough testing only on those at greatest risk.
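One simple form of that risk analysis is to score each area by likelihood of change times customer impact, and test only what clears a threshold. The scoring scheme, area names, and threshold here are illustrative assumptions, not our actual process:

```python
# Hypothetical risk-analysis sketch: score = likelihood x impact,
# each rated 1 (low) to 3 (high). Areas and ratings are invented.
areas = {
    # area: (likelihood of breakage, customer impact)
    "new feature A":      (3, 3),
    "new feature B":      (3, 2),
    "mature core engine": (1, 3),
    "legacy reports":     (1, 1),
}

def prioritize(areas, threshold=4):
    scored = {name: l * i for name, (l, i) in areas.items()}
    # Highest-risk areas first; drop anything below the cutoff
    return [name for name, score in sorted(scored.items(), key=lambda kv: -kv[1])
            if score >= threshold]

print(prioritize(areas))  # highest-risk areas first
```

On a mature product with only a few new features, a scheme like this concentrates the scarce test time where this release actually changed things.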
Consulting with the development team is crucial to understanding what areas are going to be modified or enhanced. If the area is new, a functional requirement specification (FRS) is composed by the developer. This FRS is inspected by Marketing, Development, technical leads, and the test leader. A formal inspection log summarizes all the problems or suggestions with the FRS. Once everyone signs off on all the changes to the FRS, development begins.
It's from these requirements that our test cases are derived; I have found that straying from these sets of requirements can lead to development slips, because enhancements continue to be added.
Part of writing the test cases for the next phase of testing is our evaluation of all the customer problems that have come in since the last release. Test cases are either rewritten or enhanced to include these problems, and this becomes part of the regression testing effort necessary to verify that all problems are fixed.
Finally, with all the test cases written, I compose a test plan. This plan is communicated to the project leaders and technical developers, letting them know which areas the test group will test. This way, the project leaders and developers know what's going to be tested-and can add questions or comments about the plan-before we actually begin testing.
Maintain Good Communication
Automation can work wonders; but even with perfectly generated reports, you still need a real human being to properly convey the resulting information to the rest of the team. As the test coordinator, your job is, as Rex Black has said, to communicate this information properly so that it is understood-even if it is not good news.
How do you set up a good communication environment? Here we've found that working proactively with the technical developers, side by side, helps us avoid the us-versus-them scenario. It also helps to get together as a team with other technical leaders to determine what features we can afford to add to the next release-factoring in the amount of time needed to both develop the feature and test it.
Working together like this makes all the difference in getting the product out the door on time. Listening to Development helps me stay on top of changes, and what areas should be tested. The test team can then concentrate on high-risk areas of change.
As a test manager, I feel I have to be an information resource for my team both on the product itself and in testing techniques that might point out possible problem areas-showing my testers, as James Bach writes, where to shine their flashlights. The defects submitted are automatically emailed to project leaders. I'm on that "recipients" email list as well, making it easy to forward the email back to the tester and ask for clarification, or explain to the tester why it's not a defect, or suggest what to check into further.
When testing begins and defects roll in, we can quickly evaluate what areas are failing. If testing is off to a rough start, it's my job to be proactive. Sometimes we have to stop formal testing, and I use my influence as a test manager to take us into integration testing-one-on-one, the tester and developer working together until that area is more stable.
Another advantage of automating the email of defects from the defect tracking program is that we can quickly ascertain the severity of the defect. That means I can comment back to the tester, or inform the defect coordinator that it needs to be fixed since it's halting testing in that area. Making that call is sometimes subjective. As a test leader, I have to stand in front of defects that are real problems, but also be able to facilitate a discussion about the trade-off and make a decision to let some defects wait for the next release.
As we near test completion and the targeted ship date, we hold one or more defect review meetings to determine which defects still need to be fixed, and which defects we can safely push forward with. Sometimes we conduct a risk assessment on those remaining defects that we think might hold up the release of the product.
As test leaders, it's our responsibility to get a handle on the problems in our test process, and to determine what goals we need to work on to get the group up and running.
These big-picture tasks aren't easy; there are always small-picture deadlines to be met at the same time. Empower your testers to come up with the changes that would make the testing more efficient; testers know their jobs and usually have good ideas. Then come to an agreement on the ideas, develop a guideline for the group, and make sure it's followed.
Find or develop the tools you need to plan, estimate, and forecast so you can identify efficiencies and areas for improvement. Tracking these resource issues, as well as documenting the overall test results, is going to be important to Management (who, after all, believes you cannot manage what you cannot measure).
To determine what to test, start out with product requirements as well as the enhancements added to the product. Once you've developed your plan, pass it out to the developers so they are also aware of what's getting tested. Encourage your testers to assist in the test plan efforts. Collect data on past defects to support your testing efforts. And use customer input to increase the usability of the product. Remember that clear and open communication is an essential part of this entire process. It not only helps test managers illustrate the progress of testing, but it also fosters teamwork between your testers and the developers across the hall as they work toward what has to be a common goal-to deliver quality software products.
Making sure that happens is now part of your job!