Conference Presentations

Managing Upward: Getting Approvals for the Tools You Need

Executive management does not like to spend money (on others); however, to build better software you may need to purchase better tools. Although skilled in producing code and running software tests, many development and QA managers do not have much experience preparing proposals and driving requests for funding through the management approval process. To gain their enthusiastic approval, you need insight into the executive heart and mind to better frame your proposal. Learn the decision-making process of executive managers, the facts they need to make a decision, and why they are reluctant to spend money even if it is in the budget. Build the case for your proposal in terms that match the business objectives of the CEO, CTO, CFO, and others with decision-making authority. Take away a template for a proposal along with examples of successful proposals, including visuals, data, competitive analysis, and much more.

Doug Smith, Aberro Software

Validation: What It Means in an FDA Regulated Environment

Even though formal validation may not be required in unregulated environments, many mission-critical applications could benefit from performing some of the same activities required for FDA-regulated systems. Validation provides documented evidence showing, with a high degree of assurance, that a system will consistently meet its predetermined requirements. FDA validation is required if the use of the computer system could affect product quality, safety, or efficacy, or if the system is used to support a regulatory submission. Learn how validation is accomplished by looking at the series of qualification exercises typically prescribed in a Validation Protocol. Take back with you templates for a typical Validation Protocol, including the System Development Review, Installation Qualification, Operational Qualification, Performance Qualification, and Revalidation.

Chrys Kyee, Genentech Inc.

Testing SOA Middleware: Automating What You Can't See

SOA projects employing Web services have multiple back-end integration points, volatile data, and no direct, visible user interface. Together, these factors make SOA applications complicated to test. Most companies opt for indirect manual testing because they see no other option. However, there is a light at the end of the tunnel. Using real-world case studies as examples, Jon Howarth reveals, step by step, the data-driven model Wells Fargo employs to automate its SOA regression testing. Learn about the problems its QA groups encountered and the solution that drove down costs and cycle times while increasing quality. Find out how to staff and train your team to support the solution, and see example metrics for measuring its effectiveness. Learn how to combine vendor tools into an integrated test framework that optimizes your automated SOA testing.

  • An automated testing strategy for SOA applications
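
The abstract does not include Wells Fargo's framework itself, but a minimal sketch of a data-driven Web-service regression check in the spirit Jon describes might look like the following; the endpoint URL, data file, and field names are hypothetical placeholders.

    # Hypothetical sketch of a data-driven Web-service regression check.
    # The endpoint, input file, and field names are illustrative placeholders,
    # not the actual Wells Fargo framework.
    import csv
    import json
    import urllib.request

    SERVICE_URL = "https://example.internal/api/loan-quote"  # placeholder endpoint

    def call_service(payload: dict) -> dict:
        """POST a JSON payload to the service and return the parsed JSON reply."""
        request = urllib.request.Request(
            SERVICE_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read().decode("utf-8"))

    def run_suite(data_file: str) -> None:
        """Each CSV row is one test case: request fields plus an expected value."""
        failures = 0
        with open(data_file, newline="") as handle:
            for row in csv.DictReader(handle):
                expected = row.pop("expected_status")      # expected-result column
                actual = call_service(row).get("status")   # field under test
                if str(actual) != expected:
                    failures += 1
                    print(f"FAIL {row}: expected {expected}, got {actual}")
        print(f"Suite finished with {failures} failure(s)")

    if __name__ == "__main__":
        run_suite("regression_cases.csv")

Because the test cases live in a data table rather than in scripts, new regression cases can be added without touching the harness, which is the core appeal of a data-driven model.
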
Jon Howarth, Wells Fargo

Industry Benchmarks: Insights and Pitfalls

Software and technology managers often quote industry benchmarks such as The Standish Group's CHAOS report on software project failures; other organizations use this data to judge their internal operations. Although these external benchmarks can provide insights into your company's software development performance, you need to balance the picture with internal information to make an objective evaluation. Jim Brosseau takes a deeper look at common benchmarks, including the CHAOS report, published SEI benchmark data, and more. He describes the pros and cons of these commonly used industry benchmarks with key insights into often-quoted statistics. Take away an approach Jim has used successfully with companies to help them understand the relationship among the demographics, practices, and performance of their groups and how these relate to external benchmarks.

Jim Brosseau, Clarrus Consulting Group, Inc.

Successful Outsourcing with the Crawl-Walk-Run Strategy

Large organizations may have the resources for expensive, big-bang offshore outsourcing projects. But what should small- and medium-sized organizations do when tasked with outsourcing? Based on his experiences, Uttiya Dasgupta describes a usable and inexpensive process for planning an offshore outsourcing strategy for small- to medium-sized development organizations. This crawl-walk-run strategy starts with very small projects and moves to increasingly complex ones, supported by adequate preparation for each stage. Beginning with a vision of the "run" stage, teams plan the first stages to test out processes and ensure the cultural and technology fit between the internal and outsourced organizations. Uttiya shares his insights for successful offshore outsourcing projects and, especially, the signs and metrics that tell you when you are ready to move from crawling to walking to running.

Uttiya Dasgupta, Omnispan LLC

Process Improvement - Can I Make a Difference?

Although some organizations already have formal processes in place, many do not. Most process improvement begins with one person or one department deciding to do something rather than accepting the status quo. With the right attitude, some simple tools, and a proven method for improvement, you can make a difference for yourself, your department, and ultimately, your organization. Stephanie Penland has helped numerous small and large organizations with process improvement. Sharing her experiences, both successes and failures, Stephanie describes her real-world approach for process improvement. Find new ways to overcome obstacles and obtain buy-in from the top down. Learn what it takes to get a process improvement program off the ground. Take back with you a sample of a successful process improvement plan.

  • The benefits of process improvement and ways to measure success
Stephanie Penland, SAS Institute Inc.

Managing Distributed Teams

Globalization, open source software, and cheap communications have forever changed the structure of software development project teams. Project managers face a new set of challenges with geographically distributed work teams. Unclear expectations, language and idiom differences, lack of direct supervision, and lack of accountability are just a few of the issues that project managers must overcome. As the leader of a development team with members and customers all over the world, Keith Casey is intimately familiar with the character of distributed teams. He explains why effectively executing a distributed development project requires a coherent strategy, one that goes beyond email, instant messaging, conference calls, and software tools. Join Keith for a discussion of the strategies you can use to avoid the disasters awaiting those who ignore the needs of a distributed team.

Keith Casey, CaseySoftware, LLC

Beat the Odds in Vega$: Measurement Theory Applied to Development and Testing

James McCaffrey describes in detail how to use measurement theory to create a simple software system that predicts the results of NFL professional football games with 87 percent accuracy. So, what does this have to do with a conference about developing better software? You can apply the same measurement theory principles embedded in this program to more accurately predict or compare results of software development, testing, and management. Using the information James presents, you can extend the system to predict scores in other sports and apply the principles to a wide range of software engineering problems, such as predicting Web site usage for a new system, evaluating the overall quality of similar systems, and much more.

  • Why the statistical approach does not work for making some accurate predictions
  • Measurement theory to predict and compare
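
The abstract does not spell out James's model, so the sketch below only illustrates the general idea of building a predictive scale from past game data. It is a generic least-squares rating system with fabricated sample scores, not the system presented in the session.

    # Illustrative only: a generic least-squares rating model for predicting
    # game margins. This is not the session's actual system, and the sample
    # scores below are fabricated.
    import numpy as np

    # (home_team, away_team, home_points, away_points)
    games = [
        ("A", "B", 27, 20),
        ("B", "C", 17, 24),
        ("C", "A", 21, 28),
        ("A", "C", 31, 10),
    ]

    teams = sorted({t for g in games for t in g[:2]})
    index = {t: i for i, t in enumerate(teams)}

    # One row per game: +1 for the home team, -1 for the away team, plus a
    # home-field column; the target is the observed point margin.
    rows, margins = [], []
    for home, away, home_pts, away_pts in games:
        row = [0.0] * len(teams) + [1.0]      # last column = home advantage
        row[index[home]], row[index[away]] = 1.0, -1.0
        rows.append(row)
        margins.append(home_pts - away_pts)

    ratings, *_ = np.linalg.lstsq(np.array(rows), np.array(margins), rcond=None)

    def predict_margin(home: str, away: str) -> float:
        """Predicted home-team margin: rating difference plus home advantage."""
        return ratings[index[home]] - ratings[index[away]] + ratings[-1]

    print(round(predict_margin("A", "B"), 1))

The same move, turning raw observations into a scale you can compare and extrapolate, is what carries over to software measures such as quality comparisons or usage estimates.
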
James McCaffrey, Volt Information Sciences, Inc.

Smoke Tests to Signal Test Readiness

Plumbers use "smoke tests" to find leaks in pipes when it is impractical to completely seal a plumbing system. We use this term as a metaphor for a small set of software tests designed to expose big problems without committing the resources to run a large suite of tests. Building a powerful smoke test suite is neither trivial nor intuitive; it requires an understanding of the product, the test base, and test automation techniques. Join Aditya Dada to learn the attributes of a high-quality smoke test suite and what it takes to build and, most importantly, maintain a small set of effective smoke tests. Improve nightly builds, speed up pre-integration testing, and catch major defects quickly and efficiently with a new smoke test strategy for your software under test.

  • Benefits and attributes of smoke tests
  • How to manage a smoke test suite with minimum effort
  • Real world examples of smoke tests
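
As one minimal illustration of the idea (not Aditya's actual suite), the sketch below runs a handful of fast, broad checks and stops as soon as one fails; the URLs and checks are hypothetical placeholders for whatever "can this build do anything at all?" questions fit your product.

    # Minimal smoke-suite sketch: a few fast, broad checks that gate deeper
    # testing. URLs and checks are hypothetical placeholders.
    import sys
    import urllib.request

    BASE = "https://staging.example.com"   # placeholder test environment

    def build_responds() -> bool:
        """The application under test answers HTTP at all."""
        with urllib.request.urlopen(f"{BASE}/health", timeout=5) as r:
            return r.status == 200

    def login_page_loads() -> bool:
        """The most important page renders without a server error."""
        with urllib.request.urlopen(f"{BASE}/login", timeout=5) as r:
            return r.status == 200 and b"<form" in r.read()

    SMOKE_CHECKS = [build_responds, login_page_loads]

    def main() -> int:
        for check in SMOKE_CHECKS:
            try:
                ok = check()
            except Exception:              # a crash is itself a smoke failure
                ok = False
            print(f"{'PASS' if ok else 'FAIL'}  {check.__name__}")
            if not ok:
                return 1                   # build is not ready for full testing
        return 0

    if __name__ == "__main__":
        sys.exit(main())

The value is breadth and speed: a failing check rejects the build before the full regression suite is ever scheduled, which is what makes a small suite like this worth maintaining.
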
Aditya Dada, Sun Microsystems Inc.

A Metrics Dashboard to Drive Goal Achievement

Some measurement programs with high aims fall short, languish, and eventually fail completely because few people regularly use the resulting metrics. Based on Cisco Systems' five years of experience establishing an annual quality program built around a metrics dashboard, Wenje Lai describes the company's successes and challenges and demonstrates the dashboard in use today. He shows how the metrics dashboard offers an easy-to-access way for individuals and organizations within Cisco Systems to understand the gap between their current standing and their goals. A drill-down mechanism within the dashboard lets users see the data behind each measurement to identify ownership of issues, root causes, and possible solutions. Learn what programs Cisco implemented to ensure that people use the metrics dashboard to help them in their day-to-day operations.

  • How to build an effective metrics dashboard to help achieve quality goals
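
The following sketch illustrates only the drill-down idea, not Cisco's dashboard: a top-level metric is computed from individual records, and the same records are retained so a user can trace the number back to owners and root causes. All names, fields, and figures are fabricated.

    # Illustrative drill-down sketch only; not Cisco's dashboard. All data
    # and field names are fabricated.
    from dataclasses import dataclass

    @dataclass
    class Defect:                          # hypothetical record behind the metric
        component: str
        owner: str
        root_cause: str
        open_days: int

    defects = [
        Defect("billing", "team-a", "requirements gap", 42),
        Defect("billing", "team-a", "regression", 7),
        Defect("routing", "team-b", "regression", 91),
    ]

    GOAL_MAX_OPEN_DAYS = 30                # hypothetical quality goal

    def dashboard_metric(records):
        """Top-level number a manager sees: average age of open defects."""
        return sum(d.open_days for d in records) / len(records)

    def drill_down(records):
        """Detail behind the number: records missing the goal, with owners."""
        return [d for d in records if d.open_days > GOAL_MAX_OPEN_DAYS]

    print(f"Average open days: {dashboard_metric(defects):.1f} (goal <= {GOAL_MAX_OPEN_DAYS})")
    for d in drill_down(defects):
        print(f"  {d.component}: {d.owner}, {d.root_cause}, {d.open_days} days open")

Keeping the raw records next to the rolled-up number is what lets a gap on the dashboard point directly at an owner and a root cause rather than ending as a status report.
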
Wenje Lai, Cisco Systems Inc.
