Test Planning in a Fluid Environment


As a test manager, I know the product needs to be released on schedule. I'm trying to stay on schedule, but there are changes in the software. I have to keep my test team apprised of the changes and revise the test plan…again. Now it's time to plan for the next test cycle. This article offers four keys to a successful test plan: Involvement of Test Team from the Beginning, Integration Testing, Identification of Handoff Criteria, and Interaction Among All Players.

Have you ever had to deal with constantly changing software? As a test manager, I know the product should be released on schedule, but there are always additional changes that need to be tested. Here are four keys to a successful test plan, which will help testing keep pace with constantly changing software. 

Involvement of Test Team from the Beginning
The test process for a new feature should begin as soon as possible, ideally during the inspection stage, and not toward the end of the development stage. Involving testers from the beginning will help ensure that they'll be aware of changes as they come into the project.

Integration Testing
To be properly prepared, and to contain bugs as early as possible, I have my team do integration testing with the new features. In some cases, the testers write test cases before receiving the requirements for the feature. As the test cases are written, a test case mentor reads them and either suggests additional test cases or approves them as is. The testers work with the developers, running integrated tests that exercise the new features at the unit level. The testers learn everything there is to know about a feature while they are testing it. As integration testing continues, we begin to identify the areas where formal testing will be performed. Three types of tests are staged in the test process: System Testing, Acceptance Testing, and Automated Testing.

System Testing
Builds test cases around the requirements, but takes more of a white-box view, beyond the individual functional level. It covers not only the new features, but also volume, stress, reliability, performance, and compatibility testing of the product.

Acceptance Testing
Verifies that the requirements have been met for each new feature. In addition, we must confirm that new features do not break pre-existing requirements.

Automated Testing
Assists in regression testing: the rudimentary tests that stem from acceptance or system testing are run in a methodical, scripted way. We automate only after some manual testing has occurred and the results have been verified; otherwise, we find that time is wasted writing scripts that are not useful.
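The idea of scripting a manual check only after its result has been verified can be sketched as follows. This is a minimal illustration, not the author's actual tooling; the feature under test (set_temperature_limits) and its expected behavior are hypothetical stand-ins for a real product function.

```python
# Hypothetical product function: validates and stores a temperature range.
# In practice this would be the real feature code already exercised manually.
def set_temperature_limits(low, high):
    if low >= high:
        raise ValueError("low limit must be below high limit")
    return {"low": low, "high": high}

def test_accepts_valid_range():
    # Mirrors a manual test whose result was already verified by a tester.
    assert set_temperature_limits(10, 30) == {"low": 10, "high": 30}

def test_rejects_inverted_range():
    # Mirrors a verified manual check of the error path.
    try:
        set_temperature_limits(30, 10)
    except ValueError:
        pass  # expected: inverted range is rejected
    else:
        raise AssertionError("inverted range should be rejected")

if __name__ == "__main__":
    test_accepts_valid_range()
    test_rejects_inverted_range()
    print("regression suite passed")
```

Because each scripted assertion encodes an outcome that manual testing has already confirmed, the suite stays useful as a regression net rather than automating guesses about unverified behavior.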

Identification of Handoff Criteria
We develop handoff criteria that are agreed upon between testers and developers, to indicate when formal testing can begin. Part of the handoff criteria includes the functional requirements that are written, inspected, and approved by the team. Without handoff criteria or requirements, starting formal testing too soon can delay regression testing until the very end of the test cycle. Just before formal testing begins, the test team conducts a group meeting with the developers and marketing representatives, in which each new feature is demonstrated and discussed by the person who tested it.
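Making the handoff criteria explicit and checkable is one way to keep formal testing from starting too soon. The sketch below is an assumption about how such a gate might be expressed; the criterion names are illustrative, since real criteria are whatever testers and developers agree on for each feature.

```python
# Illustrative handoff criteria for one feature. The names are hypothetical;
# the point is that each agreed criterion is recorded and checked explicitly.
HANDOFF_CRITERIA = {
    "requirements_written": True,
    "requirements_inspected": True,
    "requirements_approved": True,
    "integration_testing_complete": False,  # still in progress here
}

def ready_for_formal_testing(criteria):
    # Formal testing may begin only when every criterion is satisfied.
    return all(criteria.values())

unmet = [name for name, met in HANDOFF_CRITERIA.items() if not met]
print("ready for formal testing:", ready_for_formal_testing(HANDOFF_CRITERIA))
print("unmet criteria:", unmet)
```

Listing the unmet criteria gives the team a concrete agenda for the handoff meeting rather than a vague sense that the feature "isn't ready."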

Interaction Among All of the Players
All stages of testing are communicated to the development team, technical leaders, and project managers. This gives them the opportunity not only to know what the plan is, but also to be able to suggest additional areas to test or to approve the plan. I stay involved at every stage and stay abreast of changes as they happen. I work with the key people in development to know when and what areas of testing need to be executed. I always review the final plan with them.

How can you help smooth out the process in a constantly changing software development project? Become familiar with new features early, during the inspection process; educate the test team on each new feature by integrating those features along with their development; develop handoff criteria for starting formal testing; and keep communication flowing among all of the players.

About the author

Chris DeNardis

Chris DeNardis is an Engineering Manager at Johnson Controls, where he directs the test group for the Building Automation system, the Software Quality Lab, and field-level engineers. He has worked to automate much of the reporting in testing, while keeping the test process simple and easy to follow. He fosters team building and accountability. Chris has reviewed books such as "Lessons Learned in Software Testing" by Kaner, Bach, and Pettichord, and Rex Black's "Critical Testing Processes." Chris is listed on the practicality gauntlet of STQE magazine as well as for StickyMinds. He monitors the StickyMinds.com Message Boards as an Industry Specialist. Chris' philosophy is that regardless of one's position, the most important title is "Student." Chris can be reached at chris.p.denardis@JCI.com or at cdenardis@wi.rr.com.

AgileConnection is one of the growing communities of the TechWell network.