Conduct Early and Streamlined Testing

Speeding the Software Delivery Process, Part 2

In the first article in this three-article series, I argued that speed doesn't necessarily sacrifice quality. Software can be developed faster and better, with the help of efficient testing and quality assurance activities. I listed the following ways of reducing the test cycle time and speeding the overall delivery process:
 

  1. managing testing like a "real" project
  2. strengthening the test resources
  3. improving system testability
  4. getting off to a quick start
  5. streamlining the testing process
  6. anticipating and managing the risks
  7. actively and aggressively managing the process 

The first article in this series discussed points 1 and 2 in the above list. This, the second article, will discuss points 3-5. Later, the third article will finish the list, discussing points 6 and 7.

3. Improve System Testability

Build a cleaner system in the first place, so that fewer test-debug-fix cycles of rework are needed. Several well-known quality practices help developers build cleaner systems: requirements modeling, component reuse, defensive programming practices, code inspections, unit testing, and source code control.

Establish test entry criteria. Obtain consensus from the project manager and developers on the criteria that will be used to determine whether the system is ready to test and the testware is ready to use.

Run a smoke test, and do not accept the system for testing until it passes the smoke test. Waiting until the smoke test passes may appear to delay the start of testing, but, if handled right, publicizing the fact that a smoke test will be run can spur the developers to greater effort to meet the test entry criteria.
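
As an illustration, here is a minimal smoke test sketch in Python. It assumes a hypothetical web-based system with a health endpoint and a couple of core pages; the actual checks should come from your own test entry criteria:

  # Minimal smoke test sketch (hypothetical URL and pages).
  # The build is not accepted for testing if any check fails.
  import sys
  import urllib.request

  BASE_URL = "http://test-env.example.com"   # hypothetical test environment
  CHECKS = ["/health", "/login", "/orders"]  # hypothetical core pages

  def check(path):
      try:
          with urllib.request.urlopen(BASE_URL + path, timeout=10) as resp:
              return resp.status == 200
      except OSError:
          return False

  if __name__ == "__main__":
      failures = [p for p in CHECKS if not check(p)]
      if failures:
          print("SMOKE TEST FAILED:", ", ".join(failures))
          sys.exit(1)   # reject the build for further testing
      print("Smoke test passed; build accepted for testing.")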

Increase the early involvement of testers in the project. The testers need to climb the learning curve early and have enough time to master the system, determine how to test it, and prepare test cases and the test environment. The testers should actively participate in developing the overall project work plan.

This means that the testers need to be involved from the very beginning of the project. If this does not happen, people who are relatively ignorant about testing will commit the testers to unreasonable deadlines. The testers also will not buy into the dates that are externally (and perhaps arbitrarily) imposed on their testing efforts.

Ensure that the testers have a thorough understanding of what they are testing and why. They need to understand

  • the project goals and success factors
  • the project context
  • the system functionality
  • the system risks and vulnerabilities
  • the test equipment, procedures and tools

Use design for testability (DFT) reviews to instrument the system being tested and place probes into it, increasing its observability. DFT is intended to give black-box testers access to the hidden internal behavior of the system (and sometimes this behavior can be very short-lived).
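
One lightweight way to apply this idea, sketched below in Python with hypothetical names, is to guard probe points with a test-only flag so that short-lived internal state is recorded where black-box testers can inspect it afterward:

  # Sketch of a design-for-testability probe (hypothetical names).
  # With TEST_PROBES enabled, short-lived internal state is recorded
  # so black-box testers can inspect it after the test run.
  import json
  import os
  import time

  TEST_PROBES = os.environ.get("TEST_PROBES") == "1"   # off in production
  _probe_log = []

  def probe(point, **state):
      """Record internal state at a named probe point."""
      if TEST_PROBES:
          _probe_log.append({"time": time.time(), "point": point, **state})

  def dump_probes(path="probes.json"):
      """Write the captured probe data where testers can read it."""
      with open(path, "w") as f:
          json.dump(_probe_log, f, indent=2)

  # Example probe placement inside application code:
  def apply_discount(order_total, code):
      rate = 0.10 if code == "SAVE10" else 0.0   # hypothetical rule
      probe("discount_rate_chosen", code=code, rate=rate)
      return order_total * (1 - rate)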

Encourage a common system architecture and component reuse. Though these areas are not generally the primary concern of testers and QA analysts, a common architecture across a family of systems and planned component reuse can drastically shorten the test time and the overall development cycle time.

The development teams may be too busy or too partisan and involved to manage the overall architecture and systems framework, and similarly too busy or too partisan to manage a reusable component library. A better place to manage these activities may well be the QA group.

Stabilize the system being tested as early as possible. Place the system to be tested under change control and version control, and establish a cutoff date beyond which no changes are allowed except emergency showstopper fixes (the code freeze date).

Set the cutoff early, even if this means reducing functionality, in order to allow a decent period of time after the freeze for final testing and fixing of the stabilized version (at least two weeks for small systems and at least one month for large, complex ones).

Stabilize and control the test environment. An argument can be made that more stop-and-go interruptions of test execution are caused by gremlins in the test environment than by any other cause. Ideally, the test environment should be established and tested comfortably before it is needed. The test environment should include tools to manage the test configuration, diagnose problems in the environment, and easily reconfigure the environment from test to test.
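
A small sanity-check script, run before each test cycle, can catch many of those gremlins early. The sketch below uses hypothetical hosts and thresholds; substitute the checks that matter in your own environment:

  # Sketch of a test-environment sanity check (hypothetical hosts and limits).
  # Run it before each test cycle to catch environment gremlins early.
  import shutil
  import socket
  import sys

  def port_open(host, port):
      try:
          with socket.create_connection((host, port), timeout=5):
              return True
      except OSError:
          return False

  checks = {
      "database reachable": port_open("test-db.example.com", 5432),
      "app server reachable": port_open("test-app.example.com", 8080),
      "20 GB free disk": shutil.disk_usage("/").free > 20 * 1024**3,
  }

  failed = [name for name, ok in checks.items() if not ok]
  if failed:
      print("Test environment not ready:", "; ".join(failed))
      sys.exit(1)
  print("Test environment ready.")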

4. Get off to a Quick Start

Start the test planning early. As was mentioned earlier (it is worth repeating), allow plenty of time up front in the project for the testers to climb the learning curve, develop test cases, conduct peer reviews of test cases, learn automated test tools, etc.

Ensure the right people are involved from the very beginning. To ensure the right functional specs are written at the beginning of the project, it is important to have the right talents and viewpoints on the project team. Otherwise, much time is lost when the project has to be reengineered.

While team formation is the responsibility of the project leader, an appropriate role for QA is to verify that the right mix of marketers, customer representatives, hardware engineers, documentation specialists, and customer support reps is involved.

Be proactive. Many test and QA groups operate reactively: they are so busy with the test du jour that they have no spare time and energy to become proactively involved in future projects during those projects' formative stages.

In other words, if you are currently involved for twelve hours a day testing System A, you have no time to get involved early in System B, which currently is being defined and which will be your next testing assignment after System A. It takes foresight and adequate resources to cover both the test execution for System A and the test planning for System B simultaneously.

Be organized and well prepared for the test. Involve the testers early in reviewing the system requirements for testability. Set up and check out the test facilities and test case automation early, in parallel with the system development activities.

Remember the "push-pull" effect. Any efforts to prepare and build test infrastructure must happen in the first third of the test duration; after that, the scramble to finish the project leaves no time for them. Test automation especially needs a long lead time and is not feasible on test projects that are done at the last moment.

5. Streamline the Testing Process

Automate more of the test cases. A test group at ADP, the large information services firm, reports that they reduced the elapsed time for a group of their testing projects by 60 percent through automation.

There is a danger here, however: most test automation efforts are not effective, which imperils timely delivery.

Use risk prioritization to focus the testing on the most critical areas of the system. Use the concepts of risk-based testing to prioritize the test cases and determine how much test coverage is sufficient. According to research at Bellcore, the top 5 percent of the test cases uncover almost 50 percent of the defects, and the top 15 percent of the test cases uncover 75 percent of the defects.
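
A minimal sketch of this idea, with illustrative scores only: each test case is rated by likelihood of failure times business impact, and the suite is run in descending risk order (or cut off at the available budget):

  # Sketch of risk-based test prioritization (illustrative scores only).
  # risk = likelihood of failure x business impact; run the riskiest cases first.
  from dataclasses import dataclass

  @dataclass
  class TestCase:
      name: str
      likelihood: int   # 1 (stable area) .. 5 (new or historically buggy code)
      impact: int       # 1 (cosmetic) .. 5 (showstopper for the business)

      @property
      def risk(self):
          return self.likelihood * self.impact

  suite = [
      TestCase("payment authorization", 4, 5),
      TestCase("report formatting", 2, 2),
      TestCase("new pricing rules", 5, 4),
      TestCase("help screen text", 1, 1),
  ]

  budget = 3   # how many cases there is time to run this cycle
  for tc in sorted(suite, key=lambda t: t.risk, reverse=True)[:budget]:
      print(f"run {tc.name} (risk={tc.risk})")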

Reuse test plans, templates, and test cases. The idea is to avoid reinventing the wheel, to reuse known and field-tested test cases, and to provide consistency across tests. In most cases, automation is a prerequisite for reuse of test cases.

A coordinated, well-organized test case library or repository also is needed, which means that a test librarian must be appointed. Expecting the test cases to somehow organize themselves, or expecting a group of testers to somehow coordinate a test library among their other busy demands, is naive.

Make decisions quickly, and make the right decisions. How? By (a) being well informed, up to date, and on top of things; (b) empowering the test team; (c) centralizing decision-making authority in one person and making sure that person is the right one to lead the team; and (d) making sure the team is motivated by a sense of urgency.

The trick, of course, is to be responsive to change without disintegrating into disorganized chaos, which can happen if there is too much change. At that point, the better strategy is to try to stabilize the project and determine whether all the changes to the project are really necessary.

Increase test throughput. Often, the amount of time expended just in running the test cases can be a major source of delays. For example, the testers may have older, slower equipment: the "hand-me-downs" from the developers. Or there may be a shared environment, where the testers have to wait for their turn after developers, trainers, and others have finished with the equipment.

There are many actions that can be taken to pump the same volume of test cases through faster, thus decreasing the elapsed time to execute the suite of test cases (one such action, running independent test cases in parallel, is sketched after this list):

  • If there is heavy contention for resources in a shared test environment, provide the testers with a dedicated test lab.
  • In an expensive networked test environment, run the testing on three shifts. That will cut the elapsed time by as much as two-thirds. Or double the amount of equipment in the test lab, if that will halve the test duration.
  • On a mainframe, negotiate for more test partitions or resources (though hopefully not on the Sunday night graveyard shift, which always seems to be the most easily available time).
  • On a workstation with an automated test tool, consider upgrading to a more powerful, faster workstation.
  • For manual testing, consider hiring contractors or temporary workers to leverage the in-house testers. In this situation, the test plans and test cases have to be sufficiently clear and detailed for outsiders to be able to follow them.
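
As mentioned above, one throughput tactic that combines well with automation is running independent test cases in parallel. The sketch below assumes a hypothetical run_case command and test cases that do not share state:

  # Sketch of raising throughput by running independent test cases in parallel.
  # Assumes a hypothetical run_case command and cases that do not share state.
  import concurrent.futures
  import subprocess

  CASES = ["login.suite", "orders.suite", "billing.suite", "reports.suite"]

  def run_case(case):
      # Hypothetical runner; substitute your test tool's command line.
      result = subprocess.run(["run_case", case], capture_output=True)
      return case, result.returncode == 0

  with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
      for case, passed in pool.map(run_case, CASES):
          print(f"{case}: {'PASS' if passed else 'FAIL'}")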
     

Start the test execution early (or at least on time). A major cause for the late completion of testing projects is making a late start on the test execution. Work with the developers to help them avoid delays in having a system ready to test. Explore ways to start a partial test early with a prototype or early version of the system, in parallel and overlapping the completion of development.

This means the testing has to begin without a complete system to test, which will influence when test cases can be run initially, and the sequence of testing.

Also, employ entry criteria to determine whether a system delivered for testing is in fact ready to test, and thus avoid false starts. Entry criteria push responsibility for delivering a testable system back to the developers, so they may decrease the test time while increasing the development time, leaving no overall net saving in elapsed time.

Test in parallel with the debugging and fixing. The easiest way to run a test project is to require that the system being tested remain stable during each test cycle: no changes are allowed during the test. However, this can lead to long turnaround times to test, debug, and fix a version, especially if the cycle has to be repeated many times before the testing is done.

With overlapping testing and fixing, the turnaround time can be reduced. The danger is that a test that was just executed may no longer run the same way after the fix is introduced into the system. The fact that a test case worked correctly before the change in no way guarantees that it will still pass after the change.

In apparently unrelated (e.g., structurally decoupled), low-risk areas of the system, where the chance of error propagation caused by a change is low, this overlapping of testing and fixing can be acceptable.

Partly overlap the phases of testing. Traditional testing proceeds in a linear sequence of phases, from unit to integration to system to acceptance testing. Unless the system is already unusually clean prior to testing (in which case these phases can be hurried through), it is not a good idea to skip phases. The purpose of the unit test, for example, is to ensure that each component is as clean as feasible prior to the more complex multi-component integration and system testing.

These phases can be overlapped to expedite the overall test process. With careful coordination, the build and integration test could begin when the unit test is only 75 percent complete, and the system test could begin when the integration test is only 75 percent complete.

For example, partial builds and integration testing can proceed in parallel with the completion of the unit testing. System testing can begin without all components being ready and present in the system version under test. Acceptance testing could proceed in parallel with the system test, provided the client understands the system may appear more buggy than normal.

Move to a daily (or at least a more frequent) build cycle. Frequent build cycles use incremental regression testing with a steadily increasing set of automated regression test cases, and can encourage an earlier start to integration testing.
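
A daily build cycle can be driven by a simple script scheduled to run every night (for example, by cron). The sketch below uses hypothetical build and test commands; the regression suite it invokes is the steadily growing automated set mentioned above:

  # Sketch of a daily build-and-regression driver (hypothetical commands),
  # scheduled to run every night against the latest sources.
  import subprocess
  import sys

  STEPS = [
      ["git", "pull", "--ff-only"],                           # latest sources
      ["make", "build"],                                      # hypothetical build step
      ["python", "-m", "pytest", "tests/regression", "-q"],   # growing automated suite
  ]

  for step in STEPS:
      print("running:", " ".join(step))
      if subprocess.run(step).returncode != 0:
          print("DAILY BUILD BROKEN at step:", " ".join(step))
          sys.exit(1)   # flag the break so it is fixed the same day
  print("Daily build and regression suite passed.")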

Improve the coordination among the groups involved in the test, debug, and fix cycles. Often unnecessary delays occur because of bureaucracy, lack of coordination, and misunderstanding. The overall project manager and the test manager should be on top of this, but often they are distracted by other issues.

Empower the test & QA people by building mutual respect between them and the people who are building and maintaining the system. Not listening to the advice of the testers often causes delays.

Limit the fixes to be made prior to system release. Restrict fixes to the unambiguous showstoppers (severe problems), to reduce the rework and retesting. Establish a process to review and approve the fixes that must be made before the system is released, and minimize the number of retest cycles before delivery.

Tighten version control and change control. Lack of adequate control over changes and of system and component versions can lay waste to any schedule. This control needs to be applied in development, in testing, and in the field. It applies to platforms, configurations, and documentation as well as to the software itself.

Either a software configuration manager (SCM) independent of both developers and testers should perform this role, or if there is not a separate SCM, then the QA group should undertake the responsibility. To assume that the developers will simply coordinate versions among themselves is a fatal mistake.

Write problem reports that facilitate debugging and fixing, and don't impede it. A problem report is the mechanism that initiates the follow-up debugging and fixing actions. An effective problem report is clear, accurate, and useful to the person who will be doing the debugging.
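
As a sketch of the minimum information such a report should carry (an illustrative structure, not a prescribed format), consider:

  # Sketch of the fields an effective problem report should capture
  # (illustrative structure; adapt it to your defect-tracking tool).
  from dataclasses import dataclass, field

  @dataclass
  class ProblemReport:
      title: str                # one-line summary, specific and factual
      build_version: str        # exact version and configuration tested
      steps_to_reproduce: list  # numbered, minimal, verified steps
      expected_result: str
      actual_result: str
      severity: str             # e.g., "showstopper", "major", "minor"
      attachments: list = field(default_factory=list)   # logs, screenshots

  report = ProblemReport(
      title="Order total loses discount after quantity is edited",
      build_version="2.3.1-test",
      steps_to_reproduce=["Add a discounted item", "Edit quantity to 2", "View total"],
      expected_result="Discount still applied",
      actual_result="Discount removed; total overstated",
      severity="major",
  )
  print(report.title, "-", report.severity)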

Provide incentives for on-time completion of testing. Even if the test project leader has to change the budget, it is a good idea to provide incentives for timely work, such as a test retrospective review meeting on Kauai or at least a jolly good free lunch for the test team.

Refresh the testers. Improve test productivity by periodically sending the testers to expensive spas, to recharge their batteries and to show you really love them.

Better yet, go yourself and leave them to finish the testing. This shows them the rewards that can accrue when they are successful and get promoted to your elevated rank in the organization.

Take training programs in the key software tools and the technical environment. Ensure that everyone on the test team knows how to use the equipment and tools. Even learning to type well can improve productivity. Because of the push-pull effect, these training programs need to be taken early in the project; beyond the first third of the project, it is too late.

Use technology smartly in order to improve productivity. Email, voice mail, cell phones, and teleconferencing can be great time savers if used well.

Encourage proactive decision making within the test team. Empower team members to make their own decisions wherever possible. This helps eliminate bottlenecks caused by one person who insists on keeping all the power, or by requiring approvals from the test project leader at every step along the way. In other words, let the inmates run the asylum.

Make the feedback loop to the software engineers as fast as possible. Test software quickly after it has been written or modified, and rapidly report the test results to the authors while the code is still fresh in their minds. The longer the code is allowed to age before testing and feedback, the more the developers will have to become reacquainted with it and the more buggy the fixes are likely to be.

Monitor and manage the lengths of queues, in order to minimize wait times. Queues mean delays. They are caused by bottlenecks, which usually in turn are caused by resource imbalances. For example, there could be a long queue of problem reports, waiting for developers to work on them. Or there could be a long queue of newly coded features, waiting for the testers to get to them.
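
Queue lengths and ages are easy to compute from a defect-tracker export. The sketch below uses illustrative data; the point is to watch the numbers trend over time and rebalance resources when they grow:

  # Sketch of monitoring the problem-report queue (illustrative data).
  # Long or aging queues point to a resource imbalance that needs attention.
  from datetime import date

  # Hypothetical export from the defect tracker: (id, date opened, status)
  open_reports = [
      ("PR-101", date(2024, 5, 1), "awaiting fix"),
      ("PR-115", date(2024, 5, 20), "awaiting fix"),
      ("PR-118", date(2024, 5, 28), "awaiting retest"),
  ]

  today = date(2024, 6, 3)
  waiting = [(rid, (today - opened).days)
             for rid, opened, status in open_reports if status == "awaiting fix"]
  print(len(waiting), "reports waiting for a developer")
  print("oldest wait:", max(age for _, age in waiting), "days")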

Get the project done and worry about the paperwork later (or never). Test now, focus on getting the most important bugs fixed quickly, and catch up with the test case and test results documentation later.

Initiate a brainstorming session with the test team, to help identify work-saving and streamlining measures. Sometimes useful ideas are overlooked (and these ideas often appear obvious in hindsight), simply because nobody asked.

Read Part 1: "Manage and Strengthen Testing"
Read Part 3: "Manage the Risks and the Process"
