Conference Presentations

A Test Odyssey: Building a High Performance, Distributed Team

It seemed simple enough: hire the best available technical staff who would work from home to build some great software. Along the way, the team encountered the usual problems: time zone differences, communication headaches, and a surprising regression test monster. Matt Heusser describes how Socialtext built their high-performance development and test team, got the right people on the bus, built a culture of "assume good intent and then just do it," created the infrastructure to enable remote work, and employed a lightweight yet accountable process. Of course, the story has the impossible deadlines, conflicting expectations, unclear roles, and everything else you'd find in many development projects. Matt shares how the team cut through the noise, including building a test framework integrated into the product, to achieve their product and quality aims.

Matthew Heusser, Socialtext

The Elusive Tester-Developer Ratio

Perhaps the most sought-after and least understood metric in software testing is the ratio of testers to developers. Many people want to learn the standard industry ratio so they can determine the proper size of their test organization. Randy Rice presents the results of his recent research on this metric and explores the wide range of tester-developer ratios in organizations worldwide. Learn why this metric is rarely the best way to determine your test organization's staffing levels and how to understand and apply it in more helpful ways. Find out how different tester-developer ratios relate to test effectiveness. Take away a new appreciation of your own tester-developer ratio and ways to meaningfully convey this metric to management to help rightsize your test team and improve the ROI of testing. Determine the "right ratio" of testers to developers in your team and company.

Randy Rice, Rice Consulting Services

Virtualizing Overutilized Systems to Eliminate Testing Constraints

Organizations are already using virtualization in the test lab to eliminate underutilized systems such as physical computers and software. So why not virtualize the costly, overutilized, or completely unavailable elements of the software architecture that pose serious access and data issues for testing? The elements required for realistic end-to-end testing (mainframe computers, production systems of record, and computing services hosted by other companies) are often difficult or expensive to access for testing. Ken Ahrens explains how virtualizing these overutilized systems can make the constraints of capacity, test data, and availability a distant memory. Discover how service virtualization, employed as an adjunct to hardware lab virtualization, eliminates the bottlenecks and data management efforts that stymie many test and development teams.
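
A minimal sketch of the general idea of a virtual service, under stated assumptions: rather than calling the real, capacity-constrained backend, tests point at a stand-in process that speaks the same protocol and returns canned responses. This is a generic illustration, not iTKO's product; the endpoint path and payload below are hypothetical.

    # Hypothetical stand-in for an unavailable system of record.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Canned responses keyed by request path (illustrative data).
    CANNED = {
        "/accounts/42": {"id": 42, "status": "active", "balance": 1250.00},
    }

    class VirtualService(BaseHTTPRequestHandler):
        def do_GET(self):
            body = CANNED.get(self.path)
            self.send_response(200 if body else 404)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps(body or {"error": "unknown"}).encode())

    if __name__ == "__main__":
        # Point the system under test at http://localhost:8080 instead of
        # the real mainframe or third-party service.
        HTTPServer(("localhost", 8080), VirtualService).serve_forever()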

Ken Ahrens, iTKO

Service-Driven Test Management

Over the years, the test manager's role has evolved from "struggling to get involved early" to today's more common "indispensable partner in project success." In the past, when "us vs. them" thinking was common, it was easy to complain that the testing effort could not be carried out as planned due to insufficient specs, not enough people, late and incomplete delivery, no appropriate environments, no tools, tremendous time pressure, etc. Martin Pol explains how today's test managers must focus on providing a high level of performance. By using a service-driven test management approach, test managers support and enhance product development, enabling the project team to improve overall quality and find solutions for any testing problem that could negatively impact the project's success.

Martin Pol, POLTEQ IT Services BV

Test Automation Success: Choosing the Right People and Process

Many testing organizations mistakenly declare success when they first introduce test automation into an application or system. However, the true measure of success is sustaining and growing the automation suite over time. You need to develop and implement a flexible process, and engage knowledgeable testers and automation engineers. Kiran Pyneni describes Aetna’s two-team automation structure, the functions that each group performs, and how their collaborative efforts provide for the most efficient test automation. Kiran explains how to seamlessly integrate your test automation lifecycle with your software development lifecycle. He shares specific details on how Aetna’s automation lifecycle benefits their entire IT department and organization, and the measurements they use to track and report progress.

Kiran Pyneni, Aetna, Inc.

Automated Test Case Generation Using Classification Trees

The basic problem in software testing is choosing a manageable subset from the nearly infinite number of possible test cases. Testers must select which test cases to design, create, and then execute. Often, test resources are limited, but you still want to select the best possible set of tests. Peter M. Kruse and Magdalena Luniak share their experiences designing test cases with the Classification-Tree Editor (CTE XL), the most popular tool for systematic black-box test case design based on the classification tree method. Peter and Magdalena show how to integrate weighting factors into classification trees and automatically obtain prioritized test suites. In addition to "classical" generation rules such as minimal combination and pairwise, they share new generation rules and demonstrate the upcoming version of CTE XL, which supports prioritization by occurrence probability, error probability, or risk.
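
To make the prioritization idea concrete, here is a minimal sketch under stated assumptions: a toy tree with made-up occurrence-probability weights (the classifications, classes, and weights are illustrative, not from the talk). Each test case combines one class per classification, and cases are ordered by the product of their weights, so the most likely configurations are tested first.

    from itertools import product

    # Hypothetical classification tree: classification -> [(class, weight)].
    tree = {
        "Browser": [("Firefox", 0.6), ("Chrome", 0.3), ("Safari", 0.1)],
        "Network": [("LAN", 0.7), ("VPN", 0.2), ("Offline", 0.1)],
    }

    def prioritized_suite(tree):
        """Full combination of classes, ordered by the product of their
        occurrence-probability weights (most likely configurations first)."""
        names = list(tree)
        cases = []
        for combo in product(*tree.values()):
            weight = 1.0
            case = {}
            for name, (value, w) in zip(names, combo):
                case[name] = value
                weight *= w
            cases.append((weight, case))
        return sorted(cases, key=lambda c: c[0], reverse=True)

    for weight, case in prioritized_suite(tree):
        print(f"{weight:.3f}  {case}")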

Peter Kruse, Berner & Mattner Systemtechnik GmbH

A Deeper Dive Into Dashboards

This session is a deeper examination of how to apply dashboards in software testing. I spent several months on a project primarily building a software testing dashboard and learned some interesting things, including:

  • Resources for free examples
  • Tools to help build dashboards
  • The human issues

Randy Rice, Rice Consulting Services

Lessons Learned from 20,000 Testers on the Open Source Mozilla Project

Open source community-based software development can be extremely wild and woolly. Testing in this environment is even more so, given that it is often less structured than software design and coding activities. What are the differences between testing open source and commercial or corporate applications? What can you learn from the open source community? Take a peek into the open source testing world with Tim Riley as he describes how the Mozilla Project develops and tests the Firefox browser. Tim describes how they monitor new builds, how people all around the world engage in testing, and how anomalies quickly bubble up to the release team. Although some of the tools they use may look familiar, how the Mozilla Project applies them will give you a fresh perspective. Find out how to apply the lessons learned at Mozilla to your projects and unleash the creative power of really smart people inside and outside your organization.

Tim Riley, Mozilla

The Buccaneer Tester: Winning Your Reputation

Who drives your career as a tester or test leader? Hopefully, not the company for which you work. It's you; you must be the driver. Because the craft of testing is still relatively free and open, there is no authority structure that defines or controls our industry. There are no generally accepted and standardized credentials that will admit you to the upper tier of income and respect as a tester. There are no universities that offer degrees in testing, although certificates and certifications abound. What we do have is a pastiche of communities, proprietary methodologies, and schools of thought, together with ambitious individuals who write articles, teach, argue with each other, and speak at conferences.

James Bach, Satisfice, Inc.

Stop Guessing About How Customers Use Your Software

What features of your software do customers use the most? What parts of the software do they find frustrating or completely useless? Wouldn't you like to target these critical areas in your testing? Most organizations get feedback (much later than anyone would like) from customer complaints, product reviews, and online discussion forums. Microsoft employs proactive approaches to gather detailed customer usage data from both beta tests and released products, achieving a greater understanding of the experience of its millions of users. Product teams analyze this data to guide improvement efforts, including test planning, throughout the product cycle. Alan Page shares the inner workings of Microsoft's methods for gathering customer data, including how to know what features are used, when they are used, where crashes are occurring, and when customers are feeling pain.
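
As a rough illustration of what such instrumentation can look like (a generic sketch, not Microsoft's actual telemetry pipeline; the feature name and event fields are invented for the example), a client can wrap feature entry points so each call records which feature ran, when, and whether it failed:

    import functools
    import json
    import time

    EVENT_LOG = []  # a real client would batch and upload these events

    def instrumented(feature_name):
        """Record a usage event for every call to the wrapped feature."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                event = {"feature": feature_name, "ts": time.time()}
                try:
                    result = func(*args, **kwargs)
                    event["outcome"] = "ok"
                    return result
                except Exception as exc:
                    event["outcome"] = f"error: {type(exc).__name__}"
                    raise
                finally:
                    EVENT_LOG.append(event)
            return wrapper
        return decorator

    @instrumented("spell_check")
    def spell_check(text):
        return text  # stand-in for the real feature

    spell_check("hello")
    print(json.dumps(EVENT_LOG, indent=2))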

Alan Page, Microsoft
