Conference Presentations

STAREAST 2006: Testing Dialogues - Technical Issues

Is there an important technical test issue bothering you? Or, as a test engineer, are you looking for some career advice? If so, join experienced facilitators Esther Derby and Johanna Rothman for "Testing Dialogues - Technical Issues." Practice the power of group problem solving and develop novel approaches to solving your big problem. This double-track session takes on technical issues, such as automation challenges, model-based testing, testing immature technologies, open source test tools, testing Web services, and career development. You name it! Share your expertise and experiences, learn from the challenges and successes of others, and generate new topics in real time. Discussions are structured in a framework so that participants receive a summary of their work product after the conference.

Facilitated by Esther Derby and Johanna Rothman
Hallmarks of a Great Tester

As a manager, you want to select and develop people with the talents to become great testers, the ability to learn the skills of great testers, and the willingness to work hard in order to become great testers. As an individual, you aspire to become a great tester. So, what does it take? Michael Hunter reveals his twenty hallmarks of a great tester, from personality traits (curiosity, courage, and honesty) to skills (knowing where to find more bugs, writing precise bug reports, and setting appropriate test scope). Measure yourself and your team against other great testers, and find out how to achieve greatness in each area. Learn how to identify the great testers you don't yet know you already have!

  • The personality traits a person needs to become a great tester
  • The talents a person needs to become a great tester
  • The skills you need to develop to become a great tester
Michael Hunter, Microsoft Corporation
Trends, Innovations and Blind Alleys in Performance Testing

Join experts Scott Barber and Ross Collard for a lively discussion/debate on leading-edge performance testing tools and methods. Do you agree with Scott, who believes performance testing is poised for a great leap forward, or with Ross, who believes these "silver bullets" will not make much difference in resolving the difficulties performance testing poses? Scott and Ross will square off on topics including commercial vs. open source tools; compatibility and integration of test and live environments; design for performance testability; early performance testing during design; test case reuse; test load design; statistical methods; knowledge and skills of performance testers; predicting operational behavior and scalability limits; and much more. Deepen your understanding of new performance testing technologies, their promises, and their limitations.

  • The latest tools and methods for performance testing
Scott Barber, PerTestPlus, and Ross Collard, Collard & Company
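
One of the debate topics above, statistical methods, can be made concrete with a minimal Python sketch (the sample data is invented, and this example is not from the session itself). It shows why performance testers report percentiles rather than averages: the mean smooths over the slow outliers that users actually feel.

    import math

    # Hypothetical response times (ms) from one load-test run.
    samples = [120, 135, 128, 610, 142, 130, 155, 126, 980, 140]
    samples.sort()

    mean = sum(samples) / len(samples)
    # Nearest-rank 95th percentile: the value below which 95% of samples fall.
    p95 = samples[math.ceil(0.95 * len(samples)) - 1]

    print(f"mean: {mean:.0f} ms, p95: {p95} ms")   # mean: 267 ms, p95: 980 ms
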
Five Core Metrics to Guide the Testing Endgame

By its very nature, the endgame of software projects is a hostile environment. Typical dynamics include release pressure, continuous bug discovery, additional requirements, exhausted development teams, frenzied project managers, and "crunch mode" (a politically correct term for unpaid overtime). Although testing teams are usually in the thick of this battle, they often do not do enough to help guide the project through this critical stage. To improve the overall endgame experience, testers can help the entire team focus with a few key defect metrics. Robert Galen discusses how to track five key defect metrics: found vs. fixed; high-priority defects found; project keywords; defect transition progress; and functional distribution of errors. Join Robert to increase the likelihood of delivering your projects on time, and of surviving yet another endgame.

  • How to direct traffic for the incoming defect stream during the endgame
Robert Galen, RGCG, LLC
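
As a rough illustration of the first metric Galen names, found vs. fixed, here is a minimal Python sketch (the defect-log format and all figures are invented for the example, not taken from the talk). It accumulates daily counts of opened and closed defects; a gap between the two trend lines that refuses to shrink signals an endgame that is not converging.

    from collections import Counter

    # Hypothetical defect log: (date, event) pairs, event is "found" or "fixed".
    defect_log = [
        ("2006-05-01", "found"), ("2006-05-01", "found"),
        ("2006-05-02", "found"), ("2006-05-02", "fixed"),
        ("2006-05-03", "fixed"), ("2006-05-03", "fixed"),
    ]

    found, fixed = Counter(), Counter()
    for date, event in defect_log:
        (found if event == "found" else fixed)[date] += 1

    # Cumulative found vs. fixed per day; "open" should trend toward zero
    # as the endgame converges on a releasable build.
    total_found = total_fixed = 0
    for date in sorted({d for d, _ in defect_log}):
        total_found += found[date]
        total_fixed += fixed[date]
        print(f"{date}: found={total_found} fixed={total_fixed} "
              f"open={total_found - total_fixed}")
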
PairWise Testing: A Best Practice that Isn't

James Bach, Satisfice Inc
Static Analysis for Code Security and Reliability

By evaluating software based on its form, structure, content, and documentation, static analysis lets you examine the code in a program without actually executing it. Static analysis helps stop defects from entering the code stream in the first place, rather than waiting for the costly and time-consuming intervention of manual testing to find them. With real-world examples, Djenana Campara describes the mechanics of static analysis: when it should be used, where it can be executed most beneficially within your testing process, and how it works in different development scenarios. Find out how you can begin using code analysis to improve code security and reliability.

  • The mechanics of automated static analysis
  • Static analysis for security and reliability testing
  • Integrating static analysis into the testing process
Djenana Campara, Klocwork
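
To make the mechanics concrete, here is a minimal sketch of static analysis in Python (the two checks, bare except clauses and calls to eval, are illustrative choices, not material from the session). The source is parsed into a syntax tree and inspected for defect patterns; the program under analysis is never run.

    import ast

    SOURCE = """
    def load(cfg):
        try:
            return eval(cfg)   # executes arbitrary input
        except:                # bare except hides real failures
            return None
    """

    tree = ast.parse(SOURCE.replace("\n    ", "\n"))  # parse only; never executed
    for node in ast.walk(tree):
        # Flag bare except clauses (a reliability risk).
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            print(f"line {node.lineno}: bare except clause")
        # Flag calls to eval() (a security risk).
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            print(f"line {node.lineno}: call to eval()")
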
Credibility: Your Key to Success as a Test Manager

For test managers and testers, credibility is everything. Without credibility, people won't take you seriously or believe your findings. There are very specific and achievable things every test manager can and should do to make sure the information conveyed to stakeholders is accurate and reliable. Randall Rice talks about the credibility factors you need to exhibit for success: knowledge, attitude, objectivity, accuracy, trust, and attention to detail. With real-world examples, Randall teaches you to build long-term trust with creative ways to document test findings and present to your stakeholders the information they want, when they need it. Take away a list of eight credibility killers, and learn how to rebuild your team's credibility once it is lost.

  • A template for assessing your team’s present credibility rating
  • Ways to deliver accurate and timely information to all project stakeholders
Randy Rice, Rice Consulting Services Inc
Inside The Masters' Mind: Describing the Tester's Art

Exploratory testing is both a craft and a science. It requires intuition and critical thinking. Traditional scripted test cases usually require much less practice and thinking, which is perhaps why, in comparison, exploratory testing is often seen as "sloppy," "random," and "unstructured." How, then, do so many software projects routinely rely on it to find some of their most severe bugs? If one reason is that it lets testers use their intuition and skill, then we should study not only how that intuition and skill are exercised, but also how they can be cultivated and taught to others, much like a martial art. Indeed, that has been happening for many years, but only recently have there been major discoveries about how an exploratory tester works, and a new effort by exploratory testing practitioners and enthusiasts to create a shared vocabulary.

Jon Bach, Quardev Laboratories
Your Development and Testing Processes Are Defective

Verification at the end of a software development cycle is a very good thing. However, if verification routinely finds important defects, then something is wrong with your process. A process that allows defects to build up, only to be found and corrected later, is a process filled with waste. Processes that create long lists of defects are . . . defective processes. A quality process builds quality into the software at every step of development, so that defect tracking systems become obsolete and verification becomes a formality. Impossible? Not at all. Lean companies have learned how wasteful defects and queues can be and attack them with a zero-tolerance policy that creates outstanding levels of quality, speed, and low cost, all at the same time. Join Mary Poppendieck to learn how your organization can become leaner.

Mary Poppendieck, Poppendieck LLC
Test Metrics in a CMMI® Level 5 Organization

As a CMMI® Level 5 company, Motorola Global Software Group is heavily involved in software verification and validation activities. Shalini Aiyaroo, senior software engineer at Motorola, shows how tracking specific testing metrics can serve as key indicators of the health of testing and how these metrics can be used to improve it. To improve your testing practices, find out how to track and measure phase screening effectiveness, fault density, and test execution productivity. She also describes the group's use of Software Reliability Engineering (SRE) and fault prediction models to measure test effectiveness and take corrective action. By performing orthogonal defect classification (ODC) and escaped defect analysis, the group has found ways to improve test coverage.

CMMI® is a registered trademark of Carnegie Mellon University.

Shalini Aiyaroo, Motorola Malaysia Sdn. Bhd
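
As a back-of-the-envelope illustration of two of the metrics named above (all figures invented, not Motorola data): fault density is defects per thousand lines of code, and phase screening effectiveness is the fraction of defects present at a phase that the phase actually caught rather than let escape downstream.

    # Hypothetical figures for one release, for illustration only.
    kloc = 120                      # thousands of lines of code
    defects_found_in_test = 84      # defects caught during the test phase
    defects_escaped = 16            # defects that escaped past the phase

    # Fault density: defects per thousand lines of code.
    fault_density = defects_found_in_test / kloc

    # Phase screening effectiveness: share of defects present at the
    # phase that the phase screened out.
    screening = defects_found_in_test / (defects_found_in_test + defects_escaped)

    print(f"fault density: {fault_density:.2f} defects/KLOC")   # 0.70
    print(f"screening effectiveness: {screening:.0%}")          # 84%
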
Test Centers of Excellence: A Structured Approach for Test Outsourcing

While some outsourced test projects have delivered measurable business benefits, many others have not lived up to expectations. A new approach, Testing Centers of Excellence (TCOE), can help outsourced test groups deliver improved business value by leveraging their work and work products across multiple client projects. Anand Iyer shares his insights on implementing client-focused TCOEs and analyzes the factors that influence success. Learn to objectively measure the potential benefits and real costs of test outsourcing to determine whether it is providing business value. Find out how Testing Centers of Excellence can improve the ROI of testing, whether or not you plan to outsource.

Anand Iyer, Infosys Technologies Ltd
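
A minimal sketch of the kind of benefit-versus-real-cost comparison the session describes (all figures hypothetical, and the cost categories are assumptions for the example): the real cost of outsourcing includes transition and oversight overhead, not just the vendor's fees.

    # Hypothetical annual figures (in dollars) for illustration only.
    in_house_cost = 900_000      # current internal testing cost
    vendor_fees = 500_000        # outsourced testing fees
    transition_cost = 120_000    # knowledge transfer, ramp-up
    oversight_cost = 80_000      # vendor management overhead

    real_cost = vendor_fees + transition_cost + oversight_cost
    savings = in_house_cost - real_cost
    roi = savings / real_cost

    print(f"real cost:   ${real_cost:,}")   # $700,000
    print(f"net savings: ${savings:,}")     # $200,000
    print(f"ROI:         {roi:.0%}")        # 29%
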
