In many IT organizations, Quality Assurance (QA) staff are not dedicated to projects, but are "shared resources" supporting many projects simultaneously. Vast armies of QA staff execute defined scripts to test and certify an application once development is complete. Because they lack application familiarity and test only at the end of the development lifecycle, QA staff require significant execution support, and the feedback they provide is late in coming and often inaccurate. By comparison, on Agile projects, QA staff are dedicated team members. Testers are co-located with business and development staff. Because they collaborate with the development team on formulating acceptance criteria, and engage in testing continuously through development, QA feedback is timely and relevant. In the Agile approach, QA is less of an encumbrance and more a partner in delivery, increasing the efficiency of the software development process and the effectiveness of solutions produced.
The Brute Force Approach to Testing
A number of factors conspire against the development of a robust QA function. First, QA staff are perceived not as active producers but as passive reviewers of IT solutions. As a result, QA does not attract the same level of funding as other IT functions, such as application development or infrastructure. Second, few IT or business leaders have risen from the ranks of QA. Third, there is a dearth of QA leaders in the job market at large, and most organizations do not invest (and often do not know how to invest) in the professional development of QA people. Finally, automated functional testing tools are seen as instruments to replace testers, but tests automated through the user interface have historically been fragile and thus high maintenance. Together, these factors relegate QA to second-class status in many IT organizations, a situation further amplified by the fact that IT is itself often a second-class citizen in the overall business.
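The fragility of UI-driven test automation mentioned above can be sketched in miniature. The example below is illustrative only (the screens, labels, and locator path are invented): a recorded test keyed to a widget's position in the layout breaks when an unrelated element is added, even though the feature under test still works.

```python
# Illustrative sketch: why tests automated through the user interface
# tend to be fragile. The locator is keyed to the position of a widget
# in the screen layout, so an unrelated layout change breaks the test
# even though the feature itself is unchanged.

def find_widget(screen, path):
    """Follow a positional path (e.g. [1]) through nested widgets."""
    node = screen
    for index in path:
        node = node["children"][index]
    return node

# Version 1 of the screen: the Submit button is the second child.
screen_v1 = {"children": [
    {"label": "Name field", "children": []},
    {"label": "Submit", "children": []},
]}

# A recorded UI test hard-codes the positional path to the button.
SUBMIT_PATH = [1]
assert find_widget(screen_v1, SUBMIT_PATH)["label"] == "Submit"

# Version 2 adds a banner above the form. Submit still exists and still
# works, but the positional locator now points at the wrong widget, so
# the automated test fails and must be maintained by hand.
screen_v2 = {"children": [
    {"label": "Promo banner", "children": []},
    {"label": "Name field", "children": []},
    {"label": "Submit", "children": []},
]}
assert find_widget(screen_v2, SUBMIT_PATH)["label"] != "Submit"
```

Every cosmetic change to the application ripples through suites built this way, which is why the maintenance cost, not the tooling cost, is what makes this style of automation expensive.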
Execution models have arisen in response, but not in opposition, to these headwinds. Lacking the peer status of other IT departments such as infrastructure and application development, and lacking enough leaders to bring to bear on every project, QA assumes the role of "solution auditor." Its mission is not "How can we contribute to the technical quality and functional fitness of an application being developed?" It is instead "How can we prevent a technical problem or functional misfit from escaping into a production environment?" The auditor role requires less depth of application familiarity, so QA assigns testers and leads to work on multiple projects at the same time.
To execute with any degree of success as a shared service, the onus is on QA leads to find ways to leverage their time. In artifact-happy IT organizations, this leads to the creation of large volumes of test scripts. The intent is to write scripts that exercise the functionality of the application, and to write them in such a way that just about anybody can execute them: press buttons, navigate screens, and compare the results returned by the software to those prescribed by the script, passing or failing a script at any step of the way. The expectation is that QA leads shift from project to project writing test scripts, while the full force of QA testers is brought to bear "on demand" to execute those test cases. When all test cases pass, the application is certified.
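The execution model described above can be sketched as data plus a mechanical checker. The scenario below is hypothetical (the steps, expected results, and observed outputs are invented for illustration): a script is a fixed list of actions with prescribed results, and any tester simply compares what the software returned to what the script prescribes, failing at the first mismatch.

```python
# Illustrative sketch: a test script written so "just about anybody can
# execute it" -- a list of (action, expected result) steps, passed or
# failed at any step of the way.

script = [
    ("Open the login screen",   "login form shown"),
    ("Enter valid credentials", "dashboard shown"),
    ("Press the logout button", "login form shown"),
]

# What the application actually returned at each step (hypothetical run).
observed = ["login form shown", "dashboard shown", "error page shown"]

def execute(script, observed):
    """Compare each observed result to the scripted expectation.

    Returns (passed, first_failing_step), mirroring a tester who fails
    the script at the first step whose result does not match.
    """
    for step, ((action, expected), actual) in enumerate(zip(script, observed), 1):
        if actual != expected:
            return False, step
    return True, None

passed, failed_at = execute(script, observed)
# This run fails at step 3: the script prescribed "login form shown"
# but the software returned "error page shown".
```

Note what the checker does not capture: whether the expected results were right in the first place, which is exactly the assumption the next paragraph questions.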
There are many operational risks with this approach. It assumes that the test scripts are of high quality, and that feedback is timely and actionable. These are unwarranted assumptions. Like any IT artifact, test scripts may be of poor technical construction (ambiguous or confusing to testers) or of poor functional construction (they