Automated Software Testing

How I Fought the Software Testing Automation War and Have Determined I'd Rather Be a Stone Mason
Member Submitted
Summary:

This article discusses the roadblocks I have encountered while attempting to acquire consulting engagements for myself and others. They are the result of the management misconception that all you have to do to automate software testing is write test scripts. The immediate effect is that QA and testing functions are being staffed with test script coders at the expense of test designers. It all reminds me of a cartoon from the golden age of computer programming in which the manager says to the programmer, "You start coding and I'll go find out what the system is supposed to do." The software testing corollary would be, "You start testing and I'll go find out what we are supposed to test."

Two thousand one has been a very depressed year for quality assurance and software testing employment in the St. Louis area, and it has brought an issue to the forefront that I feel needs to be addressed. Under better economic circumstances, major St. Louis corporations such as Anheuser-Busch, Maritz, Monsanto, and Ralston employ a fair number of QA and testing consultants as well as permanent employees. Recent circumstances have changed the employment patterns and caused a problem that had been buried to surface. By trying to get the most “bang for their buck,” so to speak, employers have exacerbated the problem.

The problem has two aspects. The first is a lack of understanding of the software testing process and how automation fits into it. The second is a lack of understanding of the test engineer roles as they pertain to that process. We have reached the age of specialization, and software testing is no exception. Test engineers can specialize in test planning, test design, or test implementation. Each role requires a different set of manual and automated testing skills. Test planners and test designers may work in parallel or may be the same person; either way, they should possess similar abilities to define and document testing artifacts. Test implementers, on the other hand, are programmers when they are doing automated testing: they write and execute the test scripts and capture the test results.

As for the automation aspects of testing, I have spent the past ten years implementing test automation for these and other local companies, and I have emphasized that an automated test script is not "intelligent" in and of itself. The intelligence is in the test data. The test data must contain smart tests that are based on the software's requirements definitions and design specifications. "Intelligent" refers to the data's ability to stress known or expected application weaknesses. The intelligence must be built into the test data by the test designers prior to their use with the automated test scripts.

The test data must also include values that the script uses to navigate the application under test. The only way to insert smarts into the test script itself is to hard code the data values in the script proper, or in a function or subroutine that the script calls. This is what happens when the traditional capture/playback automation approach is used. The result is a test script maintenance nightmare.
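To make the contrast concrete, here is a rough sketch in Python. It is not code from any of my engagements; the "app" object and its type, click, and read methods are purely illustrative stand-ins for a GUI automation tool.

def create_order_hardcoded(app):
    # Capture/playback style: the data values live inside the script itself,
    # so every change to the test means editing and re-verifying the script.
    app.type("CustomerName", "ACME Corp")
    app.type("Quantity", "12")
    app.click("Submit")
    assert app.read("Status") == "Order accepted"

def create_order_data_driven(app, row):
    # Data driven style: one script body serves every row of test data;
    # the "intelligence" lives in the rows the test designer prepared.
    app.type("CustomerName", row["customer"])
    app.type("Quantity", row["quantity"])
    app.click("Submit")
    assert app.read("Status") == row["expected_status"]

In the first version, the data and the script are welded together; in the second, the script stays stable while the designer's data carries the test's intent.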

The test designer must specify the test preconditions, the navigation of the application under test, what aspects to test, and what post-test cleanup is necessary. All of this is information the test implementer (script writer) must have to construct an effective automated test script. The test designer uses the requirements definition documents and the design documents to identify potential test conditions (areas of the application that may have a predisposition toward defects) and their expected results. The test designer also uses any other available information that describes the application.
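As an illustration only, the kind of record a designer might hand to an implementer could look something like the sketch below; the field names are hypothetical, not a prescribed format.

from dataclasses import dataclass

@dataclass
class TestCaseSpec:
    test_id: str
    precondition: str       # environment state required before the test runs
    navigation: list        # screens/controls the script must traverse
    condition: str          # the aspect of the application being exercised
    test_data: dict         # values designed to stress a known weakness
    expected_result: str    # outcome the requirements say should occur
    cleanup: str            # post-test actions that keep tests independent

example = TestCaseSpec(
    test_id="ORD-017",
    precondition="customer ACME exists; no open orders",
    navigation=["MainMenu", "Orders", "NewOrder"],
    condition="order quantity at upper boundary",
    test_data={"customer": "ACME Corp", "quantity": "999"},
    expected_result="Order accepted",
    cleanup="delete order ORD-017",
)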

Writing automated test scripts is not necessarily any better than manual testing unless the scripts execute intelligent test data. I have emphasized that test data design for automated tests must be coupled with manual testing so that the manual tests are also smart. I usually design test data using the requirements and design documents while I have the application running on a test machine and MS Excel open on my desktop machine. When I identify a test requirement, I document the test conditions and environmental prerequisites for that requirement in an Excel workbook sheet. Then I manually execute the test data on the test computer. If it gives the expected results, I document it in my spreadsheet and move on to the next test condition/requirement. In this manner, I identify the test data and document the pre-test environmental variables and the post-test cleanup needed to assure that each test is independent. The test data are then exported to a format that an automated test script can use.
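A minimal sketch of that last step, assuming the workbook has been saved as a CSV file with columns such as test_id and expected_result, and assuming apply_row stands in for the implementer's script that performs the setup, navigation, execution, and cleanup for one row:

import csv

def run_suite(csv_path, apply_row):
    # Feed each designer-built row to the implementer's script and record
    # whether the actual outcome matched the designed expected result.
    results = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            actual = apply_row(row)                      # drive the application
            results.append((row["test_id"], actual == row["expected_result"]))
    return results

The point of the shape is the division of labor: the spreadsheet carries the designer's intelligence, and the script is merely the vehicle that executes it.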

Those of you who frequent such user groups as the SQA users group and the DDE users group know that I am an advocate of the "data driven" framework. My associate, Bruce Posey, has developed his own flavor of this paradigm, which he terms “Control Synchronized Data Driven Testing.” Carle Nagle of the SAS Institute has a different perspective in that he has developed a “Data Driven Engine” (DDE) using Rational Software's SQABasic language and MS Excel.

The primary difference between the two approaches is that control synchronized data driven testing stores navigation rules and special test procedures as subroutines that the main test script uses directly, whereas in the DDE model these tasks are executed by a special set of test script libraries that constitute the engine. In a sense, the DDE is test automation middleware. The second difference is that the DDE is driven by a set of test tables, developed as Excel spreadsheets, that contain special commands telling the engine where to go and what to test. In the control synchronized approach, this information is attached to the actual test data, which can be stored as CSV files or data pools.
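The following toy sketch, written in Python purely for illustration (it is neither the DDE code nor Bruce Posey's framework, and "app" is again a hypothetical automation handle), shows the shape of the engine idea: a table of commands tells a small dispatcher where to go and what to verify.

def goto_screen(app, target, _value):
    app.navigate(target)                     # navigation rule lives in the engine

def verify_field(app, target, expected):
    assert app.read(target) == expected      # verification rule lives in the engine

ENGINE = {"GOTO": goto_screen, "VERIFY": verify_field}

def run_table(app, table):
    # Each table row (command, target, value) could come from an Excel export.
    for command, target, value in table:
        ENGINE[command](app, target, value)

# Example: run_table(app, [("GOTO", "OrderEntry", ""), ("VERIFY", "Status", "Ready")])

In the control synchronized approach, by contrast, routines like these would be subroutines called directly by the main test script, with the equivalent information riding along with the test data itself.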

The problem with either approach is that both depend on the test data, and on the supporting information about those data, to execute the actual tests. If the data are not focused, the tests that are conducted are ineffective. The only way to focus automated testing is to develop test data from whatever software development process artifacts are available to the test planner/designer, or to pick the brains of the system analysts and designers.

Well, I have deviated quite a bit from the point of this article, and for that I apologize, but a little background was necessary. The only current jobs in St. Louis IT organizations, other than one or two calls for manual testers, are those that involve writing automated test scripts. Among the problems this situation causes is forcing test engineers to play all of the roles in the testing process: test planning engineer, test design engineer, and test implementation engineer. This was true to some extent even before the current economic downturn; however, the employment opportunities for test planners and test designers have all but evaporated. Even the positions that advertise for a broad background in test automation really only want people to code test scripts. Now, coding test scripts is an honorable profession, but most individuals I have encountered who enjoy implementing the tests by writing and executing the test scripts do not enjoy planning and designing the tests. Consequently, they do not put forth a good effort when asked to do so.

My personal work preference is test design, not test implementation. When I consult within the local IT organizations, the service I provide is test requirements identification and documentation, and test data design and construction. Since the beginning of the summer, test designer consulting positions have virtually dried up. All of the pre-employment interviews I have attended lately have focused only on test script writing, with an emphasis on how quickly you (the potential consultant or employee) can develop a suite of automated test scripts that will provide automated regression testing.

For example, the job description in Figure 1 below is from one of the St. Louis companies I mentioned at the beginning of this article. I was personally interviewed for this position, and during the interview it came to light that there were actually two positions and that both were for test implementers, not test designers. In fact, one position ran from the beginning of November until the end of this year, and the person hired had to demonstrate substantial progress toward a set of regression tests by the end date. The only relevant parts of the description are the phrase “the creation of manual and automated test scripts” and the listing of SQL and the DBMS as required skills. The rest of the verbiage refers to test planning and test design rather than test implementation. Why does a test script writer need a working knowledge of the software development Capability Maturity Model (CMM)? There is no reason.

Needless to say, I was not contracted for either of those openings. One reason was that I proceeded to explain to the interview committee that their automated testing effort was doomed to failure because they were ignoring the most important aspect of automated test development: the design of intelligent tests. I explained why and offered my services in that area, but to no avail. They were determined to start writing test scripts without doing the planning and design that lead to a successful automated testing implementation.

The point is that a suite of automated tests that shoots in the dark at the application under test is no better than random testing, or no testing at all. With respect to our geographic area and the companies that serve it, this situation is QA and testing management's fault. When will these individuals take up the torch and do what it takes to implement software quality control procedures that are effective? I have seen two management types: those with no clue, and those who pay lip service but lack the follow-through to build a successful automated testing function. As I said earlier, I have worked in a number of IT organizations and QA/testing departments as a contractor, and with one exception, I have yet to see test automation done properly.

With their tight, reduced budgets, these same managers, by not employing test designers to create smart tests that attack known application weaknesses, are failing to specify what targets the test scripts must assault. Think about the war in Afghanistan. If the spotters were not on the ground to guide the high-tech laser-guided bombs, the bombs would not hit the terrorist targets. All of the bombing would be ineffective unless they resorted to “carpet” bombing, as was the case in World War II, which required dropping tons of bombs on generalized targets in order to achieve results similar to those derived from the so-called smart bombs. Carpet bombing requires tremendously more effort and resources.

The laser-guided bombs are focused, much as data driven tests are focused when they attack the application you are testing. Their accuracy is unprecedented and their target demolition rate is very high. The same can be said for the defect-finding rate of focused tests, such as the test data developed for data driven test scripts. An automated test suite that runs tests that are not based on knowledge of how the application works is more akin to carpet bombing the application: a much larger number of tests is required, and the defect-finding rate is much lower; less than one third of the tests will find a discrepancy in the application. One reason for smart tests is software testing economics. The more effective the tests are at finding defects, the bigger the return on investment.

This is much the same as the return on investment of smart bombs compared to those used in carpet bombing runs. Smart bombs cost more up front but hit more targets, while ordinary dumb bombs are cheaper but many more are required to achieve the same end result. Designing intelligent tests costs more up front than just jumping in and creating automated test scripts, but the true cost of quality is seen after the system is released. The fewer the defects that survive in production software, the higher the quality and the lower the maintenance costs.

The moral of this dissertation, if there is one, is that even in bad economic times QA managers must understand that there are no shortcuts to test automation, and that failing to acquire the talent needed to do test planning and test design risks much higher costs after the software is released to the customers.

Consequently, I have started working as a stonemason with a friend who has a construction company, until the current economic recession reverses itself. I find that working in the brick-and-mortar trade is refreshing and a lot less stressful than jumping into a doomed testing endeavor in an IT department where management obviously does not understand the testing process and has no intention of supporting an attempt to implement an automated testing framework correctly and effectively.

I will return to consulting when an appropriate opportunity reveals itself. Until then I will work at my new skill, and when the situation again arises where I cannot tolerate management's failure to understand that a testing process must be in place, must be followed, and must be supported by talented and skilled individuals for test automation to be successful, I will fall back on my trade to carry me through.
