Avoiding Blunders That Will Hamper Your Testing Efforts

Member Submitted

Missing Naming Conventions–Nomenclature
I recall a project that had automated more than 1,000 test scripts across various testing releases but had no structured naming conventions for the automated test scripts or for the test folders where those scripts were saved within the test management tool. This project had difficulty finding automated test scripts from one release to the next, and testers often redeveloped automated test scripts that already existed because the existing scripts were so hard to find within the tool. One of the objectives of automating test scripts is to reuse them as much as possible from one software release to the next, and this project forfeited that benefit because its testers could not find the previously automated test scripts.

A test management tool serves as a repository for storing test scripts and test cases, but its utility is greatly diminished if the QA team does not enforce a rigorous and logical approach to naming the test scripts and the folders where they are saved. A viable approach is to name test scripts after the business processes or requirement identifiers they represent. Below I describe an example for naming test scripts and test folders based on business processes related to human resources tasks. Regardless of the approach a project uses to name test scripts and test folders, the naming standards should be robust, rigorous, and consistent; resonate with the testing team; and be widely accepted by testers and process teams.

If one is testing a human resources application, one could create folders representing the various business processes associated with human resources, such as recruiting, payroll, benefits, performance appraisal, training, organizational development, etc. Once the top-level folders are created and named, create subfolders whose names reflect more specific granularity for a particular human resources business process. For example, under the recruiting folder one could have subfolders for reimbursements, resumes, applicant status, interviews, etc. Within the subfolders, save the test scripts with proper sequencing, release name, and identifiers as part of the naming convention (e.g., HR_Recruit_Status_Rejected_releasename_001, where "Rejected" represents a candidate who was not hired).

The business process teams and testers should thoroughly understand the naming standards, and the standards should be enforced as part of the QA standards and procedures. Any saved test script that does not follow the project's naming standards should be removed from the test management tool or renamed until it is compliant with the project's naming conventions. Naming standards help ensure that test scripts can be tracked, maintained, and reused once stored within a test management tool.
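For illustration, here is a minimal sketch of how a QA team could audit saved test scripts against such a convention. The regular expression below is hypothetical, modeled on the HR_Recruit_Status_Rejected_releasename_001 example above; a real project would substitute its own naming rules and folder layout.

```python
import re
from pathlib import Path

# Hypothetical pattern modeled on HR_Recruit_Status_Rejected_releasename_001:
# area_process_subprocess_variant_release_sequence
NAME_PATTERN = re.compile(
    r"^[A-Za-z]+_[A-Za-z]+_[A-Za-z]+_[A-Za-z]+_[A-Za-z0-9]+_\d{3}$"
)

def audit_scripts(folder: str) -> list[str]:
    """Return the names of saved test scripts that violate the convention."""
    violations = []
    for script in Path(folder).iterdir():
        if script.is_file() and not NAME_PATTERN.match(script.stem):
            violations.append(script.name)
    return violations

if __name__ == "__main__":
    for name in audit_scripts("test_repository/HR/Recruiting/Status"):
        print(f"Rename or remove non-compliant script: {name}")
```

Running such a check before scripts are accepted into the repository keeps the repository compliant without manual policing.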

Inconsistent terminology
I have been on projects where testers and test managers had problems communicating with one another because each party was either using the same terminology to describe different testing artifacts or different terminology to describe the same testing artifacts. As an example, on one project I consulted for, the term "mega testing" meant the same as "integration testing." Some testers on this project used the term mega testing to describe integration testing, while other testers were not acquainted with the term and used only integration testing.

Test managers and testers should have consistent definitions and terminology for referring to the same test artifacts; for instance, a test script is not the same as a test set, and a test case is not the same as a test procedure. The testing definitions should be articulated, documented, and explained to all the testers for the various testing phases and test artifacts. The test manager should draft the project's definitions for test cycles, test scenarios, test procedures, test sets, test cases, test scripts, test phases (i.e., performance, regression, functional, smoke, stress, string, volume, load testing, etc.), test plans, etc.

I can recall numerous meetings where test managers and testers thought they were at odds or confused with one another when in fact they were discussing the same issue and agreeing on the point of discussion, but did not realize it because they were using different terms.

End users have no access to shared files or shared drives
During end-user and customer acceptance and verification tests, the customer and end-user representatives execute their testing tasks at the site where the application is being developed, before the application is actually deployed or released. These representatives might not know the project's testing procedures and standards, or how to access information on a particular shared drive or from a test management tool that requires a logon ID. Consequently, their inability to quickly access information from a test repository or shared drive hinders their ability to perform the end-user or customer acceptance testing tasks.

Test managers should obtain a list of all end users and customer representatives participating in the end-user test or customer acceptance test before these testing phases are scheduled to begin. The test manager should then request from the help desk or system administrator temporary user accounts, with the appropriate access levels, for all participants in these test phases. This practice removes a commonly encountered obstacle for end users and customer representatives.

Ignoring lessons learned
I consulted for a project that ignored recommendations based on previous hands-on testing experience. The QA manager was very skilled and adept at working with CRM applications, but every time he recommended a course of action to ameliorate or streamline the existing testing process, he encountered opposition from the project director. The QA manager thus had to prove all of his recommendations with time-consuming presentations, meetings, and proposals. The project was politically charged and highly bureaucratic, and all testing decisions, even trivial ones, had to be made by committee with individuals who had no previous testing experience, no knowledge of automated test tools, and no knowledge of CRM applications.

Testing should not be political; rather, it should be predicated on proven, world-class testing practices. An experienced QA manager who brings lessons learned from one project to the next should not encounter stiff opposition from upper management or capricious barriers to implementing decisions based on those lessons and actual hands-on experience. The adage that those who forget history are doomed to repeat it applies to testing as well: those who ignore lessons learned will not learn from past mistakes and will repeat them.

Inconsistent Templates
I was involved with an ERP project that was implementing solutions for Human Resources, Supply Chain, and Finance. The project had several process teams implementing these ERP solutions, and each team had its own form or template for creating its RTMs (requirements traceability matrices), documenting test cases, collecting and reporting test results, documenting feedback from peer reviews, etc.

Managing, discerning, and understanding all these distinct forms and templates created a logistical nuisance from a quality assurance and auditing perspective.

I would recommend that the quality assurance team implement the necessary standards and procedures to help the business process teams and testing teams work with uniform, homogeneous templates and forms for creating the project's various testing artifacts, which also facilitates the test planning activities.

Acquisition of useless automated test tools
Do not purchase or acquire automated test tools unless they can support recording and playback of test scripts against your IT environment. This blunder might sound self-evident, but I have witnessed many projects acquire expensive test tools, and spend training dollars on testers, for automated test tools that were useless for the project's testing needs.

Automated test tools do not always recognize an application's custom objects, no matter what the test tool vendor promises. Before the company acquires a tool, the test manager should ascertain that it will support the test team's testing objectives, for example by running a proof-of-concept recording against the actual application.

Waiting until the last moment to introduce test tools and train testers
I witnessed a project that made its decision to use the existing testing tools with only 3 weeks remaining before the commencement of the test execution phase. The testers needed to learn the test tool, which took 5 days of training and thus consumed valuable time from the test planning activities. Furthermore, all the test cases and test scripts that the testers had documented in a word processor now had to be transformed and loaded into the test management tool, which necessitated the development of time-consuming and complex shell scripts and macros and took over 2 weeks to accomplish.
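To give a sense of what that transformation work involves, here is a minimal sketch in Python (rather than the project's shell scripts and macros) that assumes a hypothetical plain-text export in which each test case uses "Test Case:", "Step:", and "Expected:" lines, and converts it into a CSV file suitable for bulk import. Real word-processor exports are far messier, which is precisely why the effort took weeks.

```python
import csv

# Hypothetical export layout: each test case begins with "Test Case:",
# followed by alternating "Step:" and "Expected:" lines.
def convert_export(text_path: str, csv_path: str) -> None:
    rows, case_name = [], None
    with open(text_path, encoding="utf-8") as src:
        for line in src:
            line = line.strip()
            if line.startswith("Test Case:"):
                case_name = line.removeprefix("Test Case:").strip()
            elif line.startswith("Step:"):
                rows.append([case_name, line.removeprefix("Step:").strip(), ""])
            elif line.startswith("Expected:") and rows:
                rows[-1][2] = line.removeprefix("Expected:").strip()
    with open(csv_path, "w", newline="", encoding="utf-8") as dst:
        writer = csv.writer(dst)
        writer.writerow(["test_case", "step", "expected_result"])
        writer.writerows(rows)

convert_export("exported_test_cases.txt", "import_ready.csv")
```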

Introducing the test tools so late proved disastrous. First, the testers did not have all the documented test scripts in the test management tool and had to rely on two applications, or sources, for accessing the documented test scripts and test cases. Second, the testers had very little experience working with the automated test tools, since they had only just been trained on them, and this lack of hands-on experience impeded their ability to work effectively with the automated tools during the execution phase.

I recommend introducing the test tools as early in the project as possible so that all test artifacts are documented within the test management tool, rather than documenting artifacts in one application and later creating macros and programs to move test scripts from one application to another. The test team should create test artifacts in a single source, the test management tool, to avoid duplicating effort across multiple applications. The test team should also receive training on the test tools before the test planning cycle begins so that testers are adept with the automated tools before the execution phase commences.

Information not effectively shared across teams
A chemical company that I consulted for had ever-changing requirements that were never propagated to the official requirements document. The company made unreasonable requests for customizing the ERP application and frequently changed the scope of the ERP implementation without notifying the testing team. As a result, testers created requirements traceability matrices and test cases for requirements that were obsolete or for business processes that were out of scope.

Testers frittered away their time on tasks that were not applicable to the release in question. Information about changes to requirements or scope should cascade from the customer to the company implementing or constructing the application under test as soon as possible. In turn, the test manager or the managers in charge of the IT project should apprise the testing teams of any changes to requirements or scope so that testers avoid needlessly working on tasks that are out of scope.

Waiting until the eleventh hour to identify test data–Mad rush for test data
To test out-of-the-box ERP applications that have integrated data across all business processes, it is imperative to document test cases and test scripts with valid data combinations that do not violate any business rules. ERP solutions may have unique data constraints (e.g., a created material cannot be re-created with the same number), data combination requirements (e.g., one can only select a material from the plant in question), prerequisites on transactional data (e.g., one cannot change an invoice that has not yet been created), data consumption constraints (e.g., once an order is shipped it cannot be re-shipped, and a new order must be created), or data dependencies (e.g., to create a material one must first create a company code) that need to be carefully integrated into the documented test scripts.

I can recall a project where it was necessary to test the same ERP business process with multiple sets of data, creating multiple permutations for executing the same business process. For instance, in SAP R/3 it is possible to create a material with different views and different populated fields, producing multiple permutations of material creation that need to be tested with data-driven, parameterized test scripts. This project decided to record the test scripts with one iteration of data first and then wait until 2 days before test execution to identify the remaining sets of data for testing its ERP application. The approach was a dismal failure: the testers could not identify all the data values and their dependencies for testing all the business processes in 2 days. It actually took them 2 weeks, with much help from the configuration team, to identify all the necessary data values, delaying the testing phase.
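To illustrate the data-driven, parameterized style that such permutations call for, here is a minimal pytest sketch; the create_material function and the data rows are hypothetical stand-ins for a real material-creation script and pre-validated test data.

```python
import pytest

def create_material(views: list[str], fields: dict) -> str:
    """Hypothetical stand-in for the automated material-creation script."""
    # A real script would drive the ERP UI or API here.
    return "OK" if views and fields.get("plant") else "ERROR"

# Each row is one permutation of views and populated fields. In practice
# these rows come from pre-validated test data, not hard-coded literals.
MATERIAL_PERMUTATIONS = [
    (["Basic Data"], {"plant": "1000", "unit": "EA"}),
    (["Basic Data", "Purchasing"], {"plant": "1000", "purchasing_group": "001"}),
    (["Basic Data", "Sales"], {"plant": "2000", "sales_org": "1000"}),
]

@pytest.mark.parametrize("views,fields", MATERIAL_PERMUTATIONS)
def test_create_material(views, fields):
    assert create_material(views, fields) == "OK"
```

Adding a new permutation then means adding one data row rather than recording another script, but only if the data rows themselves are identified and validated early.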

I would strongly urge companies implementing data-sensitive applications, and ERP applications in particular, to identify all the necessary data sets and pre-validate the identified data at least 2 weeks before the actual test execution phase begins.

Problems with methodology
I consulted for a company that never updated or continuously improved its existing software lifecycle methodology. Additionally, the company expected the testers to rigorously adhere to its methodology for testing all of the applications its projects supported, when the testers were in fact neither indoctrinated in nor acquainted with the company's methodology.

I would recommend that a company continuously improve its existing methodology and assess whether the methodology is consistently applicable to all of its applications under test. The company also needs to review and monitor the test results and the problems discovered in its applications under the existing methodology. While a methodology or software model might help ensure that a company has documented, consistent, and repeatable processes for testing a system, this by itself is not enough to ensure that applications are being constructed with quality. It is entirely possible to consistently and repeatedly produce low-quality applications even with the most acclaimed or renowned software methodologies or software models.

The company also needs to create methodology coaches, or champions, to help uninitiated testers and new project members get acclimated to the company's methodology.

Testing team members separated from other teams
Testers need to work closely with members of other teams: programmers, developers, process teams, subject matter experts, analysts, database administrators, etc. Given the complexity of some IT applications and out-of-the-box solutions such as COTS, ERP, and CRM systems, it is of paramount importance that testing team members have adequate access to members of the other groups in order to document and create test cases and test sets, identify test data, and produce test scripts, test scenarios, RTVMs, etc.

I consulted for a computer maker in the Midwest where the project director had imposed an unrealistic schedule for configuring and implementing a CRM solution and did not want the testers to interrupt the work of the developers and configuration team for a period of 2 months. In essence, the project director had placed a "separation wall" between the testers and the other teams on the project. This decision was ill conceived and delayed the test planning activities. The reader should understand that building a system that is high quality and as bug-free as possible requires much interaction between the testers and the other members of the project, no matter how stringent and unrealistic the project's deadlines might be.

No accountability for issues database–issues falling off the edge of the earth
I witnessed a project where testers reported all their issues by sending emails, and no team or individual took ownership of the reported issues and problems; consequently, the issues fell off the edge of the earth. A testing issue might be missing test data, an ambiguous test requirement, a missing use case that accompanies a test case, etc. The reader should note that testing issues are not equivalent to testing defects.

A more appropriate means of reporting issues is either to create an in-house issues database or repository or to purchase a solution from a vendor. All stakeholders and participants in the testing phase should log testing issues in a central system with automatic workflow notification, and a team or individual should be assigned responsibility for resolving each issue. Issues should have priorities, and the most critical issues needing immediate resolution should be reviewed every 72 hours to ensure they have been addressed and successfully resolved. The objective is to resolve issues as soon as possible and prevent them from being ignored or falling through the cracks.
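As a minimal sketch of what a central issue record and the 72-hour review check might look like (the field names are hypothetical, not taken from any particular vendor's tool):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class TestingIssue:
    issue_id: int
    summary: str      # e.g., "Missing test data for invoice change scenario"
    priority: str     # "critical", "high", "medium", or "low"
    owner: str        # team or individual accountable for resolution
    opened: datetime
    resolved: bool = False

def overdue_critical(issues: list[TestingIssue],
                     now: datetime | None = None) -> list[TestingIssue]:
    """Return critical issues still open past the 72-hour review window."""
    now = now or datetime.now()
    return [i for i in issues
            if i.priority == "critical"
            and not i.resolved
            and now - i.opened > timedelta(hours=72)]
```

The essential point is not the tooling but the accountability: every issue has an owner, a priority, and a review clock, so nothing depends on someone rereading old emails.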

Official source of requirements not updated–Multiple sources for requirements
A client of mine was implementing a CRM solution and testing it based on official requirements, yet the requirements resided in different sources with different versions, and the official requirements document was not updated even when a change request was created, which was supposed to be the triggering mechanism for updating the official requirements document.

On this project the test manager had to spend hours manually reconciling the differences between the official requirements document and the change requests, and between the official requirements document and other, unofficial requirements documents. The official requirements document was over 1,000 pages! Exacerbating the problem, the outdated requirements document left testers confused as to which requirements were valid and needed to be tested, and which requirements needed to be verified with an RTVM (requirements traceability verification matrix).

Projects, testers, and test managers are better served by an up-to-date official requirements document that is not scattered across multiple sources with different unofficial versions. The official requirements document should be updated after change requests are made official and approved. The project should have only one official requirements document, the single source for testing the application, so the test manager does not have to manually reconcile all the requirements that need to be tested.
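Until that single-source discipline is in place, even a crude automated cross-check beats hours of manual reconciliation. Here is a minimal sketch that assumes each source can be reduced to a set of requirement identifiers; the extraction itself is the hard, project-specific part, and the IDs below are invented for illustration.

```python
# Requirement IDs extracted from each source; how they are extracted
# (parsing documents, exporting from a tool) is project-specific.
official_doc = {"REQ-001", "REQ-002", "REQ-003"}
approved_change_requests = {"REQ-003", "REQ-004"}

# Approved changes that never made it into the official document.
missing_from_official = approved_change_requests - official_doc
print("Update the official document with:", sorted(missing_from_official))
```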

This seems like a simple concept to grasp, but it has nevertheless been overlooked on some of the projects I consulted for.

Testing as a democracy–Too many chefs in the kitchen
A project should have one test manager, not a plurality of test managers. The adage about having too many chefs in the kitchen applies equally to having too many test managers on a project.

I was on a project where the project director created a project structure consisting of 2 test managers with overlapping responsibilities for different testing phases, in which the test managers had to ambiguously support one another's tasks and teams. On this project the test managers had to make decisions as a group and reach consensus. The practice was counterproductive and inefficient, since the test managers did not agree on the test approach and one of them was much more experienced than the other. Even the simplest managerial test decisions took several meetings and several weeks to make and implement, when a single experienced test manager could have made and implemented the same decisions within hours.

Project managers and project directors should clearly and unequivocally understand that testing is not a democracy and that it is unproductive to make test decisions (e.g., acquisition of an automated test tool, creation of test case templates, documentation of test scripts) by committee. The project director should appoint a single test champion as the test manager, an individual who makes informed decisions, based on actual hands-on experience, about the test strategy for automating test scripts, producing test artifacts, documenting test results, test planning, test execution, etc.

Manual collection of ambiguous test planning metrics
I consulted for a project where the test manager had to manually collect, from multiple testers, the number of test cases, test scripts, and test sets the testers had developed, created, started, continued, revised, completed, etc. The purported objective of this activity was to collect metrics measuring the testing team's progress for the customer and the project director. The project director wanted to see metrics for actual completed tasks versus planned tasks as a way of measuring testing progress; for example, the number of test scripts planned for completion on a given day versus the number actually completed.

The test manager never had reliable metrics and had to manually update the collected metrics daily, which was tedious and inconvenient for the testers and the test manager alike. Compounding the problem, the testers could not reliably provide metrics, since they were unsure of the total number of test scripts that would be completed during the test planning phase and of a test script's status at any particular moment. For instance, a tester might start and complete a test script and report it as "completed," but the following week the previously "completed" script might be revised, changing its status from "completed" to "revised"; now the test manager had to manually modify the metric, and the tester had to keep track of it as well. On this project a test script's status could fluctuate daily or weekly.

A better approach to collecting real-time metrics is either to create an in-house test management tool or to purchase a complete test management solution from a vendor such as Mercury Interactive or Rational. Within a test management tool the test manager can automatically generate ad hoc reports showing testing progress with graphs and charts. Furthermore, a test management tool holds up-to-date, real-time data that the test manager does not need to update manually, since the tool keeps track of all the test planning information. Additionally, a test management tool helps the test manager look at trends and patterns, such as the number of open and closed defects, which might feed the checklist for the exit criteria (e.g., a decreasing number of defects over the last 3 weeks).
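As a small illustration of the kind of reporting a test management tool automates, here is a hypothetical sketch that tallies planned-versus-actual script completions and checks a "defects decreasing for three weeks" exit criterion; the field names and numbers are invented for illustration.

```python
from collections import Counter

# Hypothetical per-script status records as a tool might store them.
scripts = [
    {"name": "HR_Recruit_Status_Rejected_001", "status": "completed"},
    {"name": "HR_Payroll_Run_001", "status": "revised"},
    {"name": "HR_Benefits_Enroll_001", "status": "started"},
]
planned_completions = 3

status_counts = Counter(s["status"] for s in scripts)
print(f"Completed {status_counts['completed']} of {planned_completions} planned")

# Weekly open-defect counts, most recent last; the exit criterion requires
# a strictly decreasing count over the last three weeks.
open_defects_by_week = [42, 31, 25, 18]
last_three = open_defects_by_week[-3:]
trending_down = all(a > b for a, b in zip(last_three, last_three[1:]))
print("Exit criterion met:", trending_down)
```

The point is that status changes such as "completed" becoming "revised" update these tallies automatically, instead of requiring the test manager to chase testers for corrections.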
