Conference Presentations

Our Experience Using Orthogonal Defect Classification

Orthogonal Defect Classification (ODC) is a method of classifying and analyzing software defects. Drawing on real-life experience, Barbara Hirsh discusses how Motorola successfully implemented ODC within its organization, resulting in a framework for building a pervasive and cohesive defect prevention program. Learn the benefits of using ODC from the perspective of the developer, the tester, and the post-release analyst.
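To make the idea concrete, here is a minimal sketch of ODC-style classification; the attribute names and values are hypothetical illustrations, not Motorola's or IBM's actual taxonomy:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical ODC-style defect record; attributes and value sets are
# illustrative only.
@dataclass
class Defect:
    defect_id: str
    activity: str      # when the defect was found (e.g., "unit test", "system test")
    trigger: str       # what exposed it (e.g., "workload/stress", "recovery")
    defect_type: str   # nature of the fix (e.g., "assignment", "interface", "algorithm")

defects = [
    Defect("D-101", "unit test", "coverage", "assignment"),
    Defect("D-102", "system test", "workload/stress", "algorithm"),
    Defect("D-103", "field", "recovery", "interface"),
]

# A simple roll-up by defect type; ODC-style analysis looks for skewed
# distributions across such attributes to target process improvements.
print(Counter(d.defect_type for d in defects))
```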

Barbara Hirsh, Motorola
A Comparison of IBM's and Hewlett Packard's Defect Classification

In this presentation, Jon Huber examines metrics obtained from categorizing the same set of defects using both IBM's Orthogonal Defect Classification and Hewlett Packard's Origins, Types, and Modes. Learn the pros and cons of each model, and how to apply the strengths of both to create a method that benefits software development and testing.

Jon Huber, Hewlett Packard
Estimating and Tracking Software Size without Lines of Code or Function Points

Sandee Guidry explains the processes used to effectively manage projects at the Defense Finance and Accounting Service (DFAS). This presentation walks you through the process from the origination of project requests, through requirements analysis and the development of estimates, to the delivery of the final project. Learn about estimation methods and tools that were seamlessly integrated to deliver each project's committed functionality on time and on budget.

Sandee Guidry, DOD/DFAS/SEOPE
Software Sizing: There is an Easier Way

Project managers and software engineers need to accurately calculate delivery dates and resource needs for their software. This means they have to measure the size of the requirements and estimate how much time and expense they will require. But is there a sizing technique that's both effective and efficient? Popular sizing techniques such as the function point method can be difficult and labor intensive. However, there are alternative methods that produce quicker results, often without compromising accuracy. This presentation shares new ways to determine the size of your software deliverable while maintaining accuracy.

David Herron, The David Consulting Group
STAREAST 2001: Managing the End Game of a Software Project

How do you know when a product is ready to ship? QA managers have faced this question for many years. Using the methodology discussed in this presentation, you can take the guesswork out of shipping a product and replace it with key metrics that help you make the right decision rationally. Learn how to estimate, predict, and manage your software project as it approaches its release date. Learn how to define which metrics to track and how to measure them. Discover how to define a rating scale for each metric and how to create a spider chart for product readiness. This presentation is a must for any individual or organization that is serious about maximizing the results of positive events and minimizing the consequences of adverse ones.
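As a hypothetical illustration of the approach (the metric names, ratings, and weights below are invented, not the presenter's actual model), per-metric readiness ratings can be combined into a weighted score, with the individual ratings forming the axes of a spider chart:

```python
# Hypothetical release-readiness metrics rated on a 1-5 scale;
# names, ratings, and weights are illustrative assumptions.
ratings = {
    "open critical defects": 4,
    "test pass rate": 5,
    "requirements coverage": 3,
    "defect arrival trend": 4,
    "performance benchmarks": 2,
}
weights = {
    "open critical defects": 0.30,
    "test pass rate": 0.25,
    "requirements coverage": 0.20,
    "defect arrival trend": 0.15,
    "performance benchmarks": 0.10,
}

# Weighted readiness score out of 5; the per-metric ratings are the values
# one would plot on a spider (radar) chart for product readiness.
score = sum(ratings[m] * weights[m] for m in ratings)
print(f"readiness score: {score:.2f} / 5")
```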

Mike Ennis, BMC Software
Failure is Not an Option: 24 x 7 on the Web

This paper discusses the factors involved in determining the cost of a twenty-four-hours-a-day, seven-days-a-week (24x7) e-Commerce or internal web site going offline for any length of time. After determining these costs and presenting a real-life example calculation, the paper describes several ways to minimize this risk through hardware architecture, software architecture, and stress testing.
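A back-of-the-envelope version of such a cost calculation might look like the following sketch, where every figure is a hypothetical assumption rather than the paper's actual example:

```python
# Hypothetical cost-of-downtime calculation for a 24x7 e-Commerce site;
# every number below is an assumption for illustration.
revenue_per_hour = 50_000.0            # average online revenue per hour
lost_productivity_per_hour = 8_000.0   # internal users idled by the outage
recovery_labor_per_hour = 1_500.0      # staff cost to diagnose and restore

outage_hours = 3.5

cost = outage_hours * (revenue_per_hour
                       + lost_productivity_per_hour
                       + recovery_labor_per_hour)
print(f"estimated cost of a {outage_hours}-hour outage: ${cost:,.0f}")
```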

Ed Bryce, Reality Test
Problem Resolution Cycle Time Optimization

No matter how well we plan and execute software development, defects are generated and can escape to customers. Failure to resolve software problems quickly leads to negative consequences for our customers and increases internal business costs. A quick, deterministic method to prioritize problems and implement their solutions helps reduce cycle time and costs. Achieving this goal requires several steps. The first is to determine a model that links problem resolution performance to institutional variables and problem characteristics. Statistical Design of Experiments (DOE) is a tool that provides the data requirements for estimating the impacts of these variables on problem resolution. Once data has been gathered, the results of the statistical analysis can be fed into a mathematical optimization model to guide the organization. This paper describes such an analysis.
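As a minimal sketch of that first step, assume two hypothetical factors (dedicated triage staffing and problem severity) and made-up resolution times from a small two-level factorial design; a least-squares fit then gives rough estimates of each factor's impact:

```python
import numpy as np

# Hypothetical 2^2 factorial design with factors coded -1/+1.
# x1 = dedicated triage staffing (no/yes), x2 = problem severity (low/high).
# y  = observed resolution time in days (made-up data for illustration).
X = np.array([
    [1, -1, -1],
    [1, -1, +1],
    [1, +1, -1],
    [1, +1, +1],
], dtype=float)
y = np.array([12.0, 20.0, 7.0, 11.0])

# Least-squares fit of y = b0 + b1*x1 + b2*x2; the coefficients estimate how
# much each factor shifts resolution time, which could then feed an
# optimization model for prioritizing problems.
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2 = coeffs
print(f"baseline {b0:.1f} days, staffing effect {b1:.1f}, severity effect {b2:.1f}")
```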

Don Porter, Motorola
Orthogonal Defect Classification at Cisco

This presentation outlines the history of the Orthogonal Defect Classification system deployment at Cisco.

Bob Mullen, Cisco Systems
Software Metrics: State of the Practice

This presentation reviews the results of KLCI's Fourth Annual "Best Practices" study, including metrics best practices, spending benchmarks for software metrics, benefits of software metrics, software measurements used, and tools for software metrics.

Peter Kulik, KLCI Research Group
Communicate and Define the Value of Performance in Dollars and Cents

What is the real value of a computing performance improvement? What is the real cost of a computing performance degradation? This paper describes an approach used at The Boeing Company to answer these questions. It discusses the challenges of presenting technical analyses in "dollars and cents, bottom line" terminology and offers sample visual formats for communicating computing performance information clearly, completely, and concisely.
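A simplified, hypothetical translation of a response-time change into dollars (the rates and volumes below are invented, not Boeing's figures) could look like this:

```python
# Hypothetical annual cost of a computing performance degradation.
# All inputs are illustrative assumptions.
seconds_added_per_transaction = 2.0     # response-time degradation
transactions_per_user_per_day = 120
affected_users = 400
loaded_labor_rate_per_hour = 75.0       # fully burdened cost of a user's time
working_days_per_year = 230

wasted_hours_per_year = (seconds_added_per_transaction
                         * transactions_per_user_per_day
                         * affected_users
                         * working_days_per_year) / 3600.0
annual_cost = wasted_hours_per_year * loaded_labor_rate_per_hour
print(f"annual cost of the slowdown: ${annual_cost:,.0f}")
```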

Nancy Acree, CAD/CAM Products and Services
