By focusing on three simple but often overlooked methods, David Petrella’s test team consistently stays on schedule and delivers the testing results his projects expect. Learn how to develop and employ Risk Assessment documents to define the scope of testing and identify areas that cannot be tested with available resources. Publish an Entrance Criteria document that defines what resources (hardware, software, data, etc.) are needed for a successful test project.
STARWEST 2003 - Software Testing Conference
Migrating a testing infrastructure from data-driven test automation to a model-driven test architecture, including the need to maintain backward compatibility, was a huge challenge for Michael Corning and his group, but a very rewarding one. The test group manages three modeling tools supporting both state-based and grammar-based modeling techniques. Modeling became important and ultimately indispensable not only for product testing but for test infrastructure development as well.
There are more than 7,000 possible configurations of operating systems, browsers, screen resolutions, and other unique characteristics in today’s computer environments. Learn about a flexible automation framework for functional configuration testing based on an approach developed by Plaxo, Inc. This approach uses multiple virtual operating systems with a pre-installed, commercial automation tool launched on a single Intel-based computer.
How can your test organization help drive improvement in the overall software lifecycle? During the past several years, Capital One Financial has developed a Testing Center of Excellence (COE) that brought together disparate testing organizations to align their test processes and technical discipline. In addition to the measurable results of the COE, this initiative has supported and encouraged similar improvements in requirements development, project management, systems architecture, and software development methods.
By adapting Convergys's Advance Data-Driven Techniques (ADDT) process, the company has successfully automated testing of XML APIs. In a highly complex, PC-based billing application, ADDT has been used to improve the reliability of the product and significantly reduce testing time. With this approach, automated scenario-based tests are implemented for XML APIs, and test case templates are generated automatically from the schema. This technique is generic and can be used for all XML APIs.
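The session does not publish ADDT's internals, but the core idea of generating test case templates from a schema can be sketched in a few lines. The schema, element names, and output shape below are all illustrative assumptions, not Convergys's actual format:

```python
# Hypothetical sketch: derive a skeleton test-case template from an
# XML schema, so each declared element becomes a placeholder field.
import xml.etree.ElementTree as ET

XSD_NS = "{http://www.w3.org/2001/XMLSchema}"

# A minimal example schema for an imaginary billing API request.
SCHEMA = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="CreateInvoice">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="accountId" type="xs:string"/>
        <xs:element name="amount" type="xs:decimal"/>
        <xs:element name="currency" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

def template_from_schema(xsd_text):
    """Walk the schema and return a test-case template dict with
    one type-annotated placeholder per declared child element."""
    root = ET.fromstring(xsd_text)
    top = root.find(f"{XSD_NS}element")
    fields = {
        el.get("name"): f"<{el.get('type')}>"
        for el in top.iter(f"{XSD_NS}element")
        if el is not top
    }
    return {"operation": top.get("name"), "fields": fields}

print(template_from_schema(SCHEMA))
```

A tester would then fill each placeholder with scenario-specific values, keeping test data separate from test logic as in any data-driven approach.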
If written requirements are important in your development process, then requirements traceability is a key to ensuring quality in your testing process. If so, every test plan should include traceability for both functional and performance testing. Learn about the sources and characteristics of good requirements and how to perform requirements tracing on your project. Develop a Requirements Traceability Matrix (RTM) for both functional and performance testing requirements.
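At its simplest, an RTM is just a mapping from requirement IDs to the test cases that cover them; any requirement with an empty mapping is a traceability gap. A minimal sketch, with illustrative requirement and test case IDs of my own invention:

```python
# Hypothetical sketch of a Requirements Traceability Matrix (RTM).
# Requirement IDs (functional and performance) and test case IDs
# below are invented for illustration.

requirements = {
    "REQ-001": "User can log in",           # functional
    "REQ-002": "Login completes in < 2 s",  # performance
    "REQ-003": "User can reset password",   # functional
}

# Which test case exercises which requirement.
coverage = [
    ("TC-10", "REQ-001"),
    ("TC-11", "REQ-002"),
]

def build_rtm(requirements, coverage):
    """Return {requirement id: [covering test case ids]};
    an empty list flags an untested requirement."""
    rtm = {req: [] for req in requirements}
    for tc, req in coverage:
        rtm[req].append(tc)
    return rtm

rtm = build_rtm(requirements, coverage)
gaps = [req for req, tcs in rtm.items() if not tcs]
print(gaps)  # requirements with no covering test
```

In practice the same matrix is usually maintained in a spreadsheet or test management tool, but the gap check is the part worth automating.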
"How do you know when you're finished?" A key process in this assessment is making good bug severity and priority assignment. Robert Sabourin presents a fun, interactive parable that teaches an important lessonx0151assigning bug priority and severity is a business decision, not a technical one. By having clear rules for how you assign severity to bug and applying them consistent, you'll go a long way toward making the right business decisions.
It is a daunting task to create a test organization from scratch. You have to obtain buy-in from key stakeholders, recruit the test team, develop their skills, build trust with the project members, and show the value of testing. George Toney shares his challenges and successes as he went through this difficult but rewarding experience at LexisNexis Group. From a mission statement at the beginning to post-implementation follow-up, discover how to build your new test team or improve the one you have.
Don't settle for rerunning the same automated test cases over and over again. Instead, get more mileage out of your automation! Learn how to add real-time variety and randomness to automated tests and make your data-driven test cases even more dynamic. Kelly Whitmill offers hints, guidelines, and tradeoffs for automated verification of test executions and tells you how to do automated verification when you can't know the expected results until runtime.
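One common way to realize both ideas in the abstract, randomized test data plus verification when expected results are only known at runtime, is to generate inputs from a logged seed and check the system under test against a trusted oracle computed on the fly. The function names and the sort example below are illustrative assumptions, not the session's actual code:

```python
# Hypothetical sketch: seeded random test data verified at runtime
# against an oracle, since expected results can't be precomputed.
import random

def oracle_sort(data):
    """Trusted reference implementation used to compute the
    expected result at runtime for whatever input was generated."""
    return sorted(data)

def system_under_test(data):
    # Stand-in for the real code being tested.
    return sorted(data)

def run_random_test(seed):
    # Seeding makes every run replayable: log the seed on failure
    # and you can reproduce the exact random input later.
    rng = random.Random(seed)
    data = [rng.randint(0, 999) for _ in range(rng.randint(1, 50))]
    expected = oracle_sort(data)
    actual = system_under_test(data)
    return actual == expected

assert all(run_random_test(seed) for seed in range(100))
```

The tradeoff Whitmill's talk hints at is that the oracle must be simpler or more trustworthy than the system under test, otherwise the verification proves nothing.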