STARWEST 2002 - Software Testing Conference

PRESENTATIONS

By
Earl Burba and Jim Hazen, SysTest Labs

As a tester, you're often asked how far along your testing effort is, and when it will actually be done. This is one of the most difficult, and nerve-wracking, questions to answer, especially when a project has just begun or is nearing completion. A tool is needed to gather the information required to answer this question effectively, but many companies cannot afford to purchase or implement a complex commercial tool. There is, however, a solution available in commercial spreadsheet products, particularly Microsoft Excel.
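The kind of progress tracking the abstract describes needs nothing more than rows of test-case records and a few computed percentages. A minimal sketch of those spreadsheet-style calculations, using hypothetical record fields and status values:

```python
# Sketch of spreadsheet-style test progress metrics.
# The test-case records, field names, and status values are hypothetical.

def progress_summary(test_cases):
    """Summarize execution status the way a tracking spreadsheet would."""
    total = len(test_cases)
    executed = sum(1 for tc in test_cases if tc["status"] != "not run")
    passed = sum(1 for tc in test_cases if tc["status"] == "pass")
    return {
        "total": total,
        "executed": executed,
        "passed": passed,
        # How far along the effort is, and how healthy the results are.
        "pct_executed": round(100 * executed / total, 1) if total else 0.0,
        "pct_passed": round(100 * passed / executed, 1) if executed else 0.0,
    }

cases = [
    {"id": "TC-1", "status": "pass"},
    {"id": "TC-2", "status": "fail"},
    {"id": "TC-3", "status": "pass"},
    {"id": "TC-4", "status": "not run"},
]
summary = progress_summary(cases)
```

In a spreadsheet these would simply be `COUNTIF` and ratio formulas over the status column; the point is that the data model is small enough not to require a commercial tool.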

By
Darin Magoffin, Todd Hovorka, and Rich Wolkins, PowerQuest Corporation Inc

Interested in seeing a real test automation solution in action? Automated testing is an exciting thing to be part of, but automating the automation is even better. This session presents a system in which the test case/automation system is set in motion automatically after configuration management builds the software for a project whose tests have been automated. This means thousands of preprogrammed test cases can be run on multiple machines day and night.
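The "automating the automation" step the abstract describes amounts to a dispatcher: when a build appears, fan the preprogrammed test cases out across the available lab machines. A minimal sketch, with hypothetical build IDs and machine names:

```python
# Sketch of "automating the automation": when configuration management
# produces a new build, dispatch the preprogrammed test cases across the
# available test machines. Build IDs and machine names are hypothetical.

from itertools import cycle

def dispatch_tests(build_id, test_cases, machines):
    """Round-robin the test cases for one build across test machines."""
    assignments = {m: [] for m in machines}
    for tc, machine in zip(test_cases, cycle(machines)):
        assignments[machine].append((build_id, tc))
    return assignments

plan = dispatch_tests(
    "build-1042",
    [f"TC-{n}" for n in range(1, 8)],  # seven preprogrammed test cases
    ["lab-pc-1", "lab-pc-2", "lab-pc-3"],
)
```

In a real system the trigger would be a hook in the configuration management build, and each machine would poll for or receive its assignment list; the round-robin split above is only the scheduling core.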

By
Karol Vastrick, Federal Express

The emerging discipline of project management within the information technology arena can be a major move toward your testing organization accomplishing its stated goals. That's because effective project management leverages the best practices of quality control and quality assurance along with the basic principles of a sound project strategy. This means working toward goals in an organized way, and using a road map that integrates test project management into the organization.

By
Rex Black, Rex Black Consulting Services, Inc.

You're a software tester who's just been given a new project. You understand what's important to the customers, users, and other stakeholders in the new application, so designing and implementing your tests are no problem. The difficulty arises when your boss asks when testing will be completed. Just how do you develop realistic and practical estimates of test completion? More importantly, how can you intelligently respond when someone suggests cutting the test schedule?

By
Ravindra Velhal, Intel Corporation

The emergence of the handheld platform is an exciting opportunity to reapply quality and usability paradigms. It gives us the chance to establish new, industrywide quality benchmarks for handheld applications that may propel society beyond the traditional human-machine interface. Handheld-based computing has its potential, and its limits. When moving from desktop-centered quality assurance to handheld-centered applications, there will be changes that affect software testing techniques. We must be prepared.

By
Naomi Mitsumori, IBM Global Services

The IBM Global Testing Organization's performance test infrastructure is solely responsible for certifying the performance of all IBM enterprise Lotus Notes and Web applications before their deployment to end users. Naomi Mitsumori describes this infrastructure and provides insights into designing the appropriate test environment, how performance and monitoring tools should be selected, and the management style necessary for success.

By
James Lyndsay, Workroom Productions

Many projects' first test approaches are characterized by uncontrolled, ad hoc testing. Session-based testing can help you manage unscripted, reactive testing. Sessions control and record the work done by the test team, and these methods can support and give impetus to ongoing learning and team improvement. You'll be introduced to simple tools and metrics to support test sessions, illustrated by real-world examples from two case studies.
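The simple metrics the abstract mentions usually start with session records: each session has a charter and a duration, and totaling time per charter shows where the team's effort actually went. A minimal sketch, with hypothetical session records and charter names:

```python
# Sketch of a basic session-based test management metric: total the
# minutes spent per charter so the team can see where effort went.
# The session records and charter names are hypothetical.

from collections import defaultdict

def minutes_per_charter(sessions):
    """Sum session minutes by charter."""
    totals = defaultdict(int)
    for s in sessions:
        totals[s["charter"]] += s["minutes"]
    return dict(totals)

sessions = [
    {"charter": "explore login", "minutes": 90},
    {"charter": "explore checkout", "minutes": 60},
    {"charter": "explore login", "minutes": 45},
]
report = minutes_per_charter(sessions)
```

Because the sessions are recorded rather than scripted, a tally like this lets a manager direct otherwise unscripted, reactive testing without taking away the tester's freedom within a session.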

By
Vinay Pai and Arun Gupta, Sun Microsystems

Testing system-level components such as the Java API for XML-Based Remote Procedure Calls is a challenging task. Employing use-case techniques from the Unified Modeling Language (UML), Vinay Pai describes a novel approach for testing such components. His team developed use cases for a realistic application that would use the components, then developed test case designs from those use cases. The resulting test suite uncovered more than 200 defects in eight months, and exceeded code coverage goals by almost 50 percent.

By
Suzanne Garner, Cisco Systems Inc

Test escape analysis and corrective action tracking (TEACAT) is a method for collecting and using information about the causes of test escapes to prevent customer-found defects and to improve internal test, development, and release processes. The TEACAT approach provides testers and test managers with the primary causes of defect escapes from the organization into the field.
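Surfacing the "primary causes" of escapes reduces, at its core, to classifying each customer-found defect by an escape cause and ranking the counts. A minimal sketch of that tallying step, with hypothetical defect records and cause labels:

```python
# Sketch of the tallying step in test escape analysis: count
# customer-found defects by their assigned escape cause so the most
# frequent causes surface first. Records and cause labels are hypothetical.

from collections import Counter

def top_escape_causes(escapes, n=3):
    """Return the n most common escape causes with their counts."""
    return Counter(e["cause"] for e in escapes).most_common(n)

escapes = [
    {"id": 1, "cause": "missing test case"},
    {"id": 2, "cause": "environment mismatch"},
    {"id": 3, "cause": "missing test case"},
    {"id": 4, "cause": "requirement gap"},
    {"id": 5, "cause": "missing test case"},
]
ranking = top_escape_causes(escapes)
```

The corrective-action half of the method then attaches an action item to each leading cause; the ranking simply decides where that effort goes first.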

By
Reginald Howard, Advanced Systems Integration Inc. and Jon Hawkins, Alliance Technical Solutions

Developing real-time, automated testing for mission-critical programmable logic controller (PLC)-based control systems has been a challenge for many scientists and engineers. Some have elected to use customized software and hardware as a solution, but that can be expensive and time-consuming to develop.

