STAREAST 2004 - Software Testing Conference
PRESENTATIONS
Lessons Learned from End-to-End Systems Testing
End-to-end testing of large, distributed systems is a complex and often expensive task. Interface testing at this high level involves multiple sub-systems and often requires cooperation among many groups. From mimicking real-world production configurations to difficult project management and risk issues, Marc Bloom describes the challenges and successes he's experienced at Capital One in performing end-to-end testing.
Marc Bloom, Capital One Financial Corp
Looking Past "The Project" with Open-Source Tools
It is often difficult for testers and test teams to look beyond their current project. However, software test automation works best within frameworks that address all projects, not just one. Today many people and organizations are solving some or all of their test automation troubles with open-source tools, sharing solutions, development resources, and support. Carl Nagle will demonstrate how to reap solutions from others solving the same problems and tap into external development and support resources.
Carl Nagle, SAS Institute Inc
Measuring Testing Effectiveness using Defect Detection Percentage
How good is your testing? Can you demonstrate the detrimental effect on testing if not enough time is allowed? Dorothy Graham discusses a simple measure that has proved very useful in a number of organizations: Defect Detection Percentage, or DDP. Learn what DDP is, how to calculate it, and how to use it in your organization to communicate the effectiveness of your testing. From case studies of organizations that are using DDP, you'll find out the problems you may encounter and ways to overcome them.
Dorothy Graham, Grove Consultants UK
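The calculation behind DDP is simple enough to sketch in a few lines. This is an illustrative implementation only (the function name and inputs are my own, not from the session): DDP expresses the defects found by a test phase as a percentage of all defects known so far, including those that escaped to later phases or to production.

```python
def defect_detection_percentage(found_by_testing: int, found_later: int) -> float:
    """DDP = defects found by this test phase, divided by all defects
    known to date (found here plus found later), as a percentage."""
    total = found_by_testing + found_later
    if total == 0:
        raise ValueError("no defects recorded yet")
    return 100.0 * found_by_testing / total

# Example: system testing found 90 defects; 10 more surfaced in production.
print(defect_detection_percentage(90, 10))  # 90.0
```

Note that DDP can only be finalized after a stable period of live use, since "found later" grows as escaped defects surface.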
Objective Measures from Model-Based Testing
Many businesses are looking for the right project measures as they relate to project planning, scheduling, and performance. Mark Blackburn gives guidance on defining, collecting, and analyzing measures derived from a model-based testing method. These measures and their use are described in terms of an information model adapted from ISO/IEC 15939, Software Engineering - Software Measurement Process.
Mark Blackburn, Software Productivity Consortium
Ongoing Retrospectives: Project Reviews That Work
As evaluators of quality, testers can often identify critical software development problems during the process. So, how do you get other members of the development team to take notice? Lauri MacKinnon offers real-world case studies to illustrate how ongoing project retrospectives make for better testing and higher quality software. She describes ways to get objective data from project reviews done during the project, giving your team a better chance of making timely adjustments.
Lauri MacKinnon, Phase Forward Inc
Pair-Wise Testing: Moving from Theory to Practice
We've all heard the phrase, "You can't test everything." This axiom is particularly appropriate for testing multiple combinations of options, selections, and configurations. To test all combinations in some of these instances would require millions of tests. A systematic way to reduce the number of tests is called pair-wise testing. Gretchen Henrich describes the process of integrating this technique into your test practices and offers her experiences testing multiple releases of a product using pair-wise testing.
Gretchen Henrich, LexisNexis
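The reduction that pair-wise testing buys can be sketched with a small greedy generator. This is my own minimal illustration of the idea, not a tool from the session: instead of running every combination, select test cases until every pair of values, across every pair of factors, appears together at least once.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy pair-wise test selection (illustrative sketch, not an
    optimized generator). `params` is a list of option lists, one per factor."""
    pair_idx = list(combinations(range(len(params)), 2))
    # Every value pair, across every pair of factors, must appear together
    # in at least one selected test case.
    uncovered = {(i, va, j, vb)
                 for i, j in pair_idx
                 for va in params[i] for vb in params[j]}
    candidates = list(product(*params))  # the full cartesian product
    suite = []
    while uncovered:
        # Pick the candidate covering the most still-uncovered pairs.
        best = max(candidates,
                   key=lambda c: len({(i, c[i], j, c[j])
                                      for i, j in pair_idx} & uncovered))
        suite.append(best)
        uncovered -= {(i, best[i], j, best[j]) for i, j in pair_idx}
    return suite

# 3 browsers x 3 operating systems x 3 locales = 27 exhaustive cases;
# pair-wise coverage needs far fewer.
configs = [["IE", "Firefox", "Opera"],
           ["WinXP", "Linux", "MacOS"],
           ["en", "de", "ja"]]
print(len(pairwise_suite(configs)))
```

The savings grow quickly with scale: ten factors of four options each is over a million exhaustive combinations, while a pair-wise suite stays in the dozens.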
Planned Chaos: Malicious Test Day
In a test and verification organization, it can be easy to fall into predictable ruts and miss finding important defects. Use the creativity of your test team, developers, users, and managers to find those hidden bugs before the software goes into production. Ted Rivera details how his organization conceived of, administers, evaluates, and benefits from periodic malicious test days. Learn ways to make your days of planned chaos productive, valuable, and, yes, even fun.
Ted Rivera, Tivoli/IBM Quality Assurance
Preventing Web Service Security Breaches
Because Web services are especially vulnerable to security breaches, verifying the integrity of Web services is critical to successful deployment. By adopting specific white-box testing techniques at the unit and system level, testers can better ensure the security and dependability of the Web services application their company produces. Learn what you can do to test Web services against unexpected conditions and input data, and fix security problems before they harm your organization.
Gary Brunell, ParaSoft Corporation
Quality Assurance and .NET: How to Effectively Test Your New .NET Applications
If your organization is migrating to .NET, you need to be concerned about how .NET will impact your department's testing and quality assurance efforts. First you need to understand the technology underlying .NET applications; then you need to learn what is different about testing applications using this technology. Dan Koloski provides an overview of .NET technologies and the special considerations you need to know for testing them.
Dan Koloski, Empirix Software
Quality Metrics for Testers: Evaluating Our Products -- Evaluating Ourselves
Most programmers learn very little about testing techniques in school. This has a ripple effect through the software development cycle, often leaving quality issues until too late in the project. In this interactive, hands-on session, you'll learn about and have a chance to experience practical, even entertaining, methods for teaching programmers to be more proficient testers. Use this learning experience as an opportunity for team building while improving your development and test process.
Lee Copeland, Software Quality Engineering