Conference Presentations

Measure Customer and Business Feedback to Drive Improvement

Companies often go to great lengths to collect metrics. However, even the most rigorously collected data tends to be ignored, despite its findings and its potential for improving practices. Today, one metric that cannot be ignored is customer satisfaction. Customers are more than willing to...

Paul Fratellone, uTest
Non-Pathological Software Metrics

As semi-scientific software professionals, we like the idea of measuring our work. In some cases, our bosses like the idea much more than we do. Yet, meaningful software development metrics are notoriously challenging to define, and many people have given up trying because metrics often...

Stephen Frein, Comcast
The Dangers of the Requirements Coverage Metric

When testing a system, one question that always arises is, “How much of the system have we tested?” Coverage is defined as the ratio of “what has been tested” to “what there is to test.” One of the most basic coverage metrics is requirements coverage, which measures the percentage of the requirements that have been tested. Unfortunately, the requirements coverage metric comes with serious difficulties: requirements are hard to count; they are ideas, not physical things, and come in different formats, sizes, and quality levels. In addition, making a complete count of “what there is to test” is impossible in today’s hyper-complex systems. This imprecision makes the metric unreliable, and at worst undefined and unusable. What is a test manager to do?
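The coverage ratio defined above can be made concrete with a small sketch. This is a minimal illustration with hypothetical requirement IDs, assuming each requirement is simply marked tested or not (the very simplification the abstract warns about, since real requirements differ in size and quality):

```python
# Requirements coverage: "what has been tested" / "what there is to test".
# Data is hypothetical; real requirements vary in format, size, and quality,
# which is exactly why this metric can mislead.
requirements = {
    "REQ-1": True,   # tested
    "REQ-2": True,   # tested
    "REQ-3": False,  # not yet tested
    "REQ-4": False,  # not yet tested
}

tested = sum(requirements.values())
coverage = tested / len(requirements)
print(f"Requirements coverage: {coverage:.0%}")  # -> Requirements coverage: 50%
```

Note that the percentage looks precise even though the denominator (the count of requirements) is the contested quantity.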

Lee Copeland, Software Quality Engineering
The Metrics Minefield

In many organizations, management demands measurements to help assess the quality of software products and projects. Are those measurements backed by solid metrics? How do we make sure that our metrics reliably measure what they're supposed to? What skills do we need to do this job well? Measurement is the art and science of making reliable and significant observations. Michael Bolton describes some common problems and risks with software measurement and what we can do to address them. Learn to think critically about numbers, what they appear to measure, and how they can be distorted. Improve the quality of the information you gather by understanding the relationship between observation, measurement, and metrics. Evaluate your measurements by asking probing questions about their validity.

Michael Bolton, DevelopSense, Inc.
Test Metrics in a CMMI® Level 5 Organization

As a CMMI® Level 5 company, Motorola Global Software Group is heavily involved in software verification and validation activities. Shalini Aiyaroo, senior software engineer at Motorola, shows how tracking specific testing metrics can serve as key indicators of the health of testing and how these metrics can be used to improve testing. To improve your testing practices, find out how to track and measure phase screening effectiveness, fault density, and test execution productivity. Shalini Aiyaroo describes their use of Software Reliability Engineering (SRE) and fault prediction models to measure test effectiveness and take corrective actions. By performing orthogonal defect classification (ODC) and escaped defect analysis, the group has found ways to improve test coverage. CMMI® is a registered trademark of Carnegie Mellon University.
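Fault density, one of the metrics the abstract mentions, is conventionally computed as defects found per unit of code size, often per thousand lines of code (KLOC). A hedged sketch with made-up numbers, not figures from the Motorola presentation:

```python
# Fault density: defects found per KLOC (thousand lines of code).
# The function and its inputs are illustrative assumptions.
def fault_density(defects_found: int, lines_of_code: int) -> float:
    """Return defects per thousand lines of code."""
    return defects_found / (lines_of_code / 1000)

# e.g., 18 defects found in a 12,000-line component
print(fault_density(18, 12_000))  # -> 1.5 defects per KLOC
```

Tracked per phase, a falling fault density late in testing can indicate (though not prove) that earlier phases are screening defects effectively.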

Shalini Aiyaroo, Motorola Malaysia Sdn. Bhd
Choosing Effective Test Metrics

Every software project can benefit from some sort of metrics, but industry studies show that 80 percent of software metrics initiatives fail. How do you know if you've selected the right set of test metrics and whether they support your organizational goals? Alan Page offers methods for determining effective and useful test metrics for software quality and individual effectiveness and presents new studies showing correlations between certain metrics and post-ship quality. Alan provides examples of how commonly used metrics can be easily misused and offers helpful tips for implementing the right test metrics for your project and organization. Find out what can cause metrics projects to fail and what you can do to avoid being part of the 80 percent failure statistic.

Alan Page, Microsoft Corporation
Tips for Performing a Test Process Assessment

Looking for a systematic model to help improve testing practices within your team, department, or enterprise? Recently, Lee Copeland has led several major test process assessment projects for both small and large test organizations. Whether you are chosen to lead an assessment project within your organization or just want to get better at testing, join Lee as he shares the insights he has learned, beginning with the importance of using a proven assessment model. Lee discusses the pre-assessment preparation required, including reviewing documentation and choosing interview candidates; tips for interviewing using a questionnaire; analyzing the data you gather; writing an assessment report; and delivering your findings in a way that will be understood and acted upon.

Lee Copeland, Software Quality Engineering
Software Quality Metrics as Agents for Change

What is the purpose of software quality metrics, and what value do they provide to the organization? Which metrics not only report on but also help drive changes and improvements in software quality? Based on his work at EMC, Jim Bampos discusses the metrics they use to predict software quality at ship time and the key quality questions to ask customers after ship. Find out what it takes to roll out a successful metrics program and the results you can expect, including quality ownership across the organization and improved customer satisfaction. Watch out for unintended consequences and wrong behavior that can result from a metrics program. Learn from Jim the key steps to ensure that your organization adopts the metrics program and that people are held accountable for the data and results.

James Bampos, EMC Corporation
Leading Cultural Change When Implementing Process Improvements

When we are part of an improvement initiative such as CMMI®, Six Sigma, or Agile practices, we often focus on the technical aspects and pay little attention to the people and cultural issues. Major change produces a significant disruption of expectations whether the change is perceived as positive or negative. So, you need a defined process to help ensure that your improvement initiative achieves its goals. Jennifer Bonine presents the Organizational Change Management (OCM) process to help you manage the human aspects of implementing major, complex changes. She describes eight human risk factors that can sabotage process improvement programs. Learn from Jennifer how OCM can help you deal with people’s reactions to change and provide you with a change implementation architecture.

Jennifer Bonine, Express Scripts
A New Paradigm for Collecting and Interpreting Bug Metrics

Many software test organizations count bugs; however, most do not derive much value from the practice, and some metrics can actually harm the quality of their software or their organization. Although valuable insights can be gained from examining find and fix rates or by graphing open bugs over time, you can be more easily fooled than informed by such metrics. Metrics used for control rather than inquiry tend to promote dysfunctional behavior whenever people know they are being measured. In this session, James Bach examines the subtleties of bug metrics analysis and shows examples of both helpful and misleading metrics from actual projects. Instead of the well-known Goal/Question/Metric paradigm, James presents a less intrusive approach to measurement that he describes as Observe/Inquire/Model. Learn about the dynamics and dangers of measurement and a new approach to improve your metrics and the software you produce.

James Bach, Satisfice, Inc.


AgileConnection is a TechWell community.