Test automation has come a long way in the past twenty years. During that time, many of today's most popular test execution tools have come into use, and a variety of implementation methods have been tried and tested. Many successful organizations began their automation efforts with a data-driven approach and later evolved them into what is now called keyword-driven test automation. Many versions of the keyword-driven test execution concept have been implemented.
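The core idea behind keyword-driven automation can be sketched in a few lines: test steps are plain data rows of the form (keyword, arguments), and a dispatcher maps each keyword to an implementation function, so tests can be composed without programming. The keywords and step data below are illustrative assumptions, not taken from any particular tool:

```python
# Minimal sketch of keyword-driven test execution. The keyword names,
# step data, and URL here are hypothetical examples.

def open_page(state, url):
    # Record which page the "browser" is on.
    state["page"] = url

def enter_text(state, field, value):
    # Simulate typing a value into a named field.
    state[field] = value

def verify_value(state, field, expected):
    # Assert that a field holds the expected value.
    assert state.get(field) == expected, f"{field!r} != {expected!r}"

# The keyword table: the only place that binds names to implementations.
KEYWORDS = {
    "open_page": open_page,
    "enter_text": enter_text,
    "verify_value": verify_value,
}

def run_test(steps):
    """Execute a list of (keyword, *args) rows against shared state."""
    state = {}
    for keyword, *args in steps:
        KEYWORDS[keyword](state, *args)
    return state

# A test case is pure data; a non-programmer could maintain this table.
result = run_test([
    ("open_page", "https://example.test/login"),
    ("enter_text", "username", "alice"),
    ("verify_value", "username", "alice"),
])
print(result["page"])
```

The data-driven approach mentioned above varies only the argument values per row; keyword-driven goes one step further by also letting the test data choose which actions run.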
STAREAST 2007 - Software Testing Conference
Lightning Talks are nine five-minute talks in a fifty-minute time period. Lightning Talks represent a much smaller investment of time than track speaking and offer the chance to try conference speaking without the heavy commitment. Lightning Talks are an opportunity to present your single, biggest bang-for-the-buck idea quickly. Use this as an opportunity to give a first-time talk or to present a new topic for the first time.
Metrics can play a vital role in software development and testing. We use metrics to track progress, assess situations, predict events, and more. However, measuring often creates "people issues" that, when ignored, become obstacles to success or may even kill a metrics program. People often feel threatened by the metrics being gathered, and those performing and communicating the measurements may introduce distortion. When being measured, people can react with creative, sophisticated, and unexpected behaviors.
Does your testing provide value to your organization? Are you asked questions like "How good is the testing anyway?" and "Is our testing any better this year?" How can you demonstrate the quality of the testing you perform, both to show when things are getting better and to show the effect of excessive deadline pressure? Defect Detection Percentage (DDP) is a simple measure that organizations have found very useful in answering these questions.
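DDP is commonly defined as the defects found by testing divided by the total defects that existed at that time (those found by testing plus those that escaped and were found later), expressed as a percentage. A minimal sketch with illustrative numbers:

```python
def defect_detection_percentage(found_in_test, found_after_release):
    """DDP = defects found by testing / total defects present * 100.

    'found_after_release' counts escaped defects discovered later,
    e.g. by customers. The figures below are illustrative only.
    """
    total = found_in_test + found_after_release
    if total == 0:
        return None  # no defects observed; DDP is undefined
    return 100.0 * found_in_test / total

# Example: testing found 90 defects; customers later reported 10 more,
# so testing detected 90 of the 100 defects that were present.
print(defect_detection_percentage(90, 10))  # 90.0
```

Tracked release over release, a falling DDP is one way to show the effect of excessive deadline pressure in concrete terms.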
Ten years of experience with test outsourcing at Polteq Lucent Technologies has shown that it can be successful. On the way to success, however, many lessons were learned, some of them painful. Kees Blokland shares the most common test outsourcing mistakes others have made in the hope that you will not repeat them. One key mistake is expecting large and rapid cost savings; many who have been seduced by this temptation have not been successful.
The use of modular design has been a common technique in software development for years. However, the same principles that make modular designs useful in programming, namely increased reusability and reduced maintenance time, are equally applicable to test case development. Shaun Bradshaw describes the key differences between procedural and modular test case development and explains the benefits of the modular approach.
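The benefit can be shown with a toy example: shared steps such as login are factored into reusable modules that test cases compose, so a change to the login flow is fixed in one place rather than in every procedural script that repeats it. The module and test names below are hypothetical:

```python
# Hypothetical illustration of modular test case design: shared steps
# live in one reusable module; each test case composes the modules it
# needs instead of repeating the steps procedurally.

def login_module(actions, user):
    # The one place to maintain the login flow for every test using it.
    actions.append(f"login as {user}")

def search_module(actions, term):
    actions.append(f"search for {term}")

def logout_module(actions):
    actions.append("logout")

def test_search_returns_results():
    # A test case is just a composition of modules.
    actions = []
    login_module(actions, "tester")
    search_module(actions, "widgets")
    logout_module(actions)
    return actions

print(test_search_returns_results())
```

In a procedural design, the login steps would be copied into every test script, and a single change to the login screen would force edits across all of them.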
OpenSTA is a solid open-source testing tool that, when used effectively, meets the basic needs of performance testing Web applications. Dan Downing introduces you to the basics of OpenSTA: downloading and installing the tool, using the Script Modeler to record and customize performance test scripts, defining load scenarios, running tests with Commander, capturing the results with Collector, interpreting the results, and exporting captured performance data to Excel for analysis and reporting.
Test automation teams are often founded with high expectations from senior management: the proverbial "silver bullet" remedy for a growing testing backlog, perceived schedule problems, or low-quality applications. Unfortunately, many test automation teams fail to meet these lofty expectations and subsequently die a slow organizational death. Their regression test suites are not adequately maintained and corrode, software licenses for tools are not renewed, and ultimately the test engineers move on to greener pastures.
Hiring great testers is the single biggest challenge that test managers face. Unfortunately, the number of experienced testers is dwindling while the number of testers with weak skill sets is growing. Drawing on his experience building an independent testing company, Krishna Iyer shares unconventional yet quite effective methods to find, hire, and retain great testers. He looks for testers outside the software world and has had success, for example, with auditors, who have the same inquisitiveness that makes testers great.
With mounting pressure to deliver high-quality applications at breakneck speed, the need for risk-based testing has increased dramatically. In fact, now practically everyone involved in testing claims to be doing risk-based testing. But are you really? Drawing on real-life examples,