Conference Presentations

Automated Database Testing with NUnit

With a framework built in .NET on top of NUnit, the open source test harness, database application developers and testers can quickly create a basic set of build verification tests and lay the foundation for a set of more powerful tests. Alan Corwin demonstrates the framework in the context of a fully functional Web site and offers a brief history of how his team developed it, showing how they came to introduce automated testing into their development process. Learn what problems they encountered, how they overcame them, and the value this framework brings to the team. NOTE: Those with some knowledge of Microsoft's .NET framework, a .NET programming language, and object-oriented programming will get the most out of the advanced parts of the presentation.

  • How to use NUnit, an open source test harness for .NET
  • How to build abstract test classes
  • How to increase quality and shorten development time with this framework
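
The talk's own code is not reproduced in this listing, but the abstract-test-class idea it builds on can be sketched independently of .NET. Below is a rough, hypothetical illustration in Java with JUnit 4 (used here instead of NUnit and C# only to keep this listing to a single example language); the class names, the in-memory H2 database URL, and the CUSTOMERS table are all invented, and the application schema is assumed to already exist in the database under test.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    import static org.junit.Assert.assertTrue;

    // Hypothetical base class: owns the connection lifecycle so that every
    // concrete database test inherits the same build verification checks.
    public abstract class AbstractDatabaseTest {
        protected Connection connection;

        // Each concrete test names the table it verifies.
        protected abstract String tableUnderTest();

        @Before
        public void openConnection() throws Exception {
            // Illustrative JDBC URL; requires the H2 driver on the classpath.
            connection = DriverManager.getConnection("jdbc:h2:mem:appdb");
        }

        @After
        public void closeConnection() throws Exception {
            if (connection != null) {
                connection.close();
            }
        }

        // Build verification test: the table exists and can be queried.
        @Test
        public void tableIsQueryable() throws Exception {
            try (Statement stmt = connection.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT COUNT(*) FROM " + tableUnderTest())) {
                assertTrue(rs.next());
            }
        }
    }

    // Concrete test class: inherits every check for the CUSTOMERS table.
    class CustomerTableTest extends AbstractDatabaseTest {
        @Override
        protected String tableUnderTest() {
            return "CUSTOMERS";
        }
    }

More powerful tests then extend the same base class, adding assertions about row counts, constraints, or stored procedures without repeating any setup code.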
Alan Corwin, Process Builder, Inc.
Quality Metrics for Test: Evaluating Products, Evaluating Ourselves

As testers, we usually focus our efforts on measuring the quality of products. We count defects and organize them by severity, we compute defect density, we examine the changes in those metrics over time for trends, and we chart customer satisfaction. While these are important, we must apply additional measurements to ourselves if we are to reach the next level of testing maturity. Lee Copeland suggests that we (1) count the number of defects in our test cases and the time to find and fix them; (2) compute test coverage, a measure of how much of the software we have exercised under test conditions; and (3) determine Defect Removal Effectiveness, the ratio of the number of defects we find to the total number we should have found. If we start keeping these and other metrics, we will be on our way to improving our testing processes and results.
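
In plain terms (the exact formulations Lee uses in the session may differ slightly), the two computed measures work out as follows:

    Defect density = defects found / size of the software (e.g., per thousand lines of code)

    Defect Removal Effectiveness (DRE) = defects found by the team / (defects found by the team + defects that escape to customers)

where the DRE denominator approximates "the total number we should have found." For example, if testing finds 90 defects and customers later report 10 more, DRE is 90 / (90 + 10) = 90 percent.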

Lee Copeland, Software Quality Engineering
Combinatorial Testing Experiences, Tools, and Solutions

Good test designs often require testing many different sets of valid and invalid input parameters, hardware/software environments, and system conditions. This results in a combinatorial explosion of test cases. For example, testing different combinations of possible hardware and software components on a typical PC could involve hundreds or even thousands of possible tests. The classic question for effective testing is always, "Given limited time and resources, which of the combinations should be tested?" Peter Zimmerer describes the underlying challenges in test case design for combinatorial testing and presents solutions based on orthogonal arrays and all-pairs test techniques. From Peter's experiences, learn about both free and commercial tools that support these methods, such as AllPairs, Jenny, Pro-Test, and Telcordia AETFWEB, and the lessons his team has learned along the way.

  • A design dilemma due to the combinatorial explosion of test conditions
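
To see the scale of the reduction these techniques offer, consider three parameters with three possible values each: exhaustive testing needs 3 x 3 x 3 = 27 combinations, while the nine rows of the classic L9 orthogonal array already cover every pair of values across every pair of parameters. The minimal Java sketch below (not code from the talk; the parameters are left abstract) hard-codes that array and verifies the pairwise coverage claim:

    import java.util.HashSet;
    import java.util.Set;

    // Minimal sketch: verifies that the L9 orthogonal array covers every
    // pair of values for three three-valued parameters (which might map to,
    // say, three browsers, three operating systems, and three databases).
    public class PairwiseCoverageCheck {

        // Each row is one test; each column holds a value index 0..2 for one parameter.
        private static final int[][] L9 = {
            {0, 0, 0}, {0, 1, 1}, {0, 2, 2},
            {1, 0, 1}, {1, 1, 2}, {1, 2, 0},
            {2, 0, 2}, {2, 1, 0}, {2, 2, 1}
        };

        public static void main(String[] args) {
            int parameters = 3;
            int values = 3;
            boolean allPairsCovered = true;

            // Every pair of parameters must exhibit all 3 x 3 value pairs somewhere in the array.
            for (int p1 = 0; p1 < parameters; p1++) {
                for (int p2 = p1 + 1; p2 < parameters; p2++) {
                    Set<String> seen = new HashSet<>();
                    for (int[] row : L9) {
                        seen.add(row[p1] + "," + row[p2]);
                    }
                    if (seen.size() != values * values) {
                        allPairsCovered = false;
                    }
                }
            }

            System.out.println("Exhaustive combinations:   " + (int) Math.pow(values, parameters));
            System.out.println("Tests in the pairwise set: " + L9.length);
            System.out.println("All value pairs covered:   " + allPairsCovered);
        }
    }

Tools such as those named above generate comparable, and much larger, covering arrays automatically, which is where they earn their keep.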
Peter Zimmerer, Siemens AG
STARWEST 2004: Model-Based Testing for Java and Web-Based GUI Applications

With the tools available today, model-based testing for Java applications is extremely difficult to implement. According to Jeff Feldstein, you need a scripting language that allows you to create and manipulate complex data structures and to drive your tests with models of the application. Learn about Jeff's successes and the obstacles he faced implementing model-based testing for Java and HTML applications. During the presentation, Jeff demonstrates the use of XDE Tester's ScriptAssure and Java to create an HTML application model and shows examples of the programming required for model-based testing. In this model-driven approach, you will see how changes in the user interface do not require changes to the tests.

  • Ways to implement the required data structures in Java for modeling
  • What to avoid in creating the models
  • How to automatically adapt test cases to changes in the application's GUI
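
To make the structural idea concrete, here is a rough, hypothetical sketch in plain Java (invented names throughout; it is not Jeff's XDE Tester/ScriptAssure code): the model walks abstract screens and actions, and only a thin adapter layer knows how the real GUI exposes them.

    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    // Abstract actions the model knows about; names are invented for illustration.
    enum Action { LOGIN, SEARCH, ADD_TO_CART, LOGOUT }

    // Adapter layer: the only code that knows how the real GUI exposes each action.
    interface GuiAdapter {
        void perform(Action action);
        String currentScreen();
    }

    // A tiny state model: which actions are legal on each screen, and where they lead.
    class ApplicationModel {
        private final Map<String, Map<Action, String>> transitions = Map.of(
            "LoginScreen",   Map.of(Action.LOGIN, "HomeScreen"),
            "HomeScreen",    Map.of(Action.SEARCH, "ResultsScreen", Action.LOGOUT, "LoginScreen"),
            "ResultsScreen", Map.of(Action.ADD_TO_CART, "HomeScreen", Action.LOGOUT, "LoginScreen")
        );

        List<Action> legalActions(String screen) {
            return List.copyOf(transitions.get(screen).keySet());
        }

        String expectedNext(String screen, Action action) {
            return transitions.get(screen).get(action);
        }
    }

    // Model-driven test: a seeded random walk over the model, checked against the GUI.
    class ModelDrivenTest {
        void randomWalk(ApplicationModel model, GuiAdapter gui, int steps, long seed) {
            Random random = new Random(seed);
            String state = "LoginScreen";
            for (int i = 0; i < steps; i++) {
                List<Action> actions = model.legalActions(state);
                Action action = actions.get(random.nextInt(actions.size()));
                gui.perform(action); // the adapter translates this into real UI events
                String expected = model.expectedNext(state, action);
                if (!expected.equals(gui.currentScreen())) {
                    throw new AssertionError("Expected " + expected
                            + " but the application showed " + gui.currentScreen());
                }
                state = expected;
            }
        }
    }

When a button is renamed or a screen is rearranged, only the GuiAdapter implementation changes; the model and the walk that drives the tests stay untouched, which is the property described in the last bullet above.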
Jeff Feldstein, Cisco Systems Inc.
Testing and Thriving in an FDA Regulated Environment

As for all life-critical software, the FDA guidance document on software validation emphasizes defect prevention, complexity analysis, risk assessment, and code coverage. Additionally, all software changes must be managed carefully and tested extensively. Based on his many years of experience testing biotech products, pharmaceuticals, medical devices, and various healthcare systems, Jim Bedford discusses the practical software tools and practices he has used to meet these stringent expectations. As a first line of action, Jim recommends implementing automated code scans to verify that development consistently follows standards and recommended best practices. Further, measuring code complexity and performing path analysis provide a way to quantify risk and to design corresponding test plans.
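
One widely used way to quantify the complexity Jim refers to (the abstract does not name the specific metrics his team applies) is McCabe's cyclomatic complexity:

    V(G) = E - N + 2P

where E is the number of edges in a routine's control-flow graph, N is the number of nodes, and P is the number of connected components (1 for a single routine). V(G) equals the number of linearly independent paths through the routine, so it both flags high-risk code and sets the target number of test cases for basis-path coverage.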

Jim Bedford, Metreck Corporation
Improving Testing with Process Assessments

Fast development cycles, distributed architectures, code reuse, and developer productivity suites make it imperative that we improve our software test efficiency. A process assessment is one approach to begin an improvement program.

  • What process assessments are available?
  • How do you conduct an assessment?
  • How do you guard against incorrect information?
  • How do you know what to improve first?
  • How can you make successful improvements without negatively impacting your current work?

Learn the answers to these questions and more from Intel's experiences using the Test Process Improvement (TPI) model as a basis for assessments. See example scores, improvement suggestions, and adopted actions. Hear about the high points and low points of using this process, and take away a comparison of the TPI model with the CMMI Level 3 key process areas.

Robert Topolski, Intel Corporation
Free Test Tools Are Like a Box of Chocolates

You never know what you are going to get! Until you explore, it can be hard to tell whether a free, shareware, or open source tool is an abandoned and poorly documented research project or a robust powerhouse of a tool. In this information-filled presentation, Danny Faught shows you where open source and freeware tools fit within the overall test tool landscape. During this double session, Danny installs and tries out several tools right on the spot and shares tips on how to evaluate tools you find on the Web. Find out about licensing, maintenance, documentation, Web forums, bugs, and more. Discover the many different types of testing tools that are available for free and where to find them. Danny demonstrates examples of tools that you can put to use as soon as you get back to the office.

Danny Faught, Tejas Software Consulting
Using Personas to Improve Testing

Too often testers are thrown into the testing process without direct knowledge of the customers' behaviors and business processes. As a tester, you need to think and act like a customer to make sure the software does, in an easy-to-use way, what the customer expects. By defining personas and using them to model the way real customers will use the software, you can bring the complete customer view into designing test cases. Get the basics of how to implement customer personas, their limitations, and ways to create tests using them. See examples of good bugs found using personas while learning to write bug reports based on them.

  • What you need to know to develop customer personas
  • How to use customer personas to design test cases
  • The types of bugs found by using personas but missed by other techniques
Robyn Edgar, Microsoft
Managing Agile Test Departments

What is the impact of agile methods on test departments and testers? How do you manage testing in an agile test department? Robert Martin, an early adopter and proponent of agile development practices, discusses his experiences and recommendations for how to organize and run an agile test department. He describes the principles, practices, tools, and metrics that are important to successful test management within agile development. Agile methods change the role of test departments from verification to specification. With agile methods, you develop tests before the code, and the tests become the detailed requirements documentation. This paradigm shift has a profound impact on both the test team and the programming team. Learn about the test management problems that often arise in making the transition to agile development and common solutions that address these issues.
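
As a minimal, hypothetical sketch of what "tests as detailed requirements" looks like in practice (JUnit-style Java with invented names; this is not code from Robert's talk), the test below is written before any production code exists and then serves as the executable specification the programmers build against:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Written first: these tests ARE the detailed requirement
    // "orders of $100 or more receive a 10% discount."
    public class DiscountSpecificationTest {

        @Test
        public void ordersOfOneHundredDollarsOrMoreGetTenPercentOff() {
            DiscountCalculator calculator = new DiscountCalculator();
            assertEquals(10.0, calculator.discountFor(100.0), 0.001);
        }

        @Test
        public void smallerOrdersGetNoDiscount() {
            DiscountCalculator calculator = new DiscountCalculator();
            assertEquals(0.0, calculator.discountFor(99.0), 0.001);
        }
    }

    // Written afterward, with just enough code to make the specification pass.
    class DiscountCalculator {
        double discountFor(double orderTotal) {
            return orderTotal >= 100.0 ? orderTotal * 0.10 : 0.0;
        }
    }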

Robert Martin, Object Mentor
Mainframe-Class Recoverability Testing

The corollary to the axiom "all software has bugs" is "you will never find them all." Even if you could, hardware and environmental failures are always lurking about, waiting to crash the software. If you accept the premise that failures are inevitable, then part of your testing should confirm that the software gracefully recovers from failures, protecting customer data and minimizing downtime. In this presentation, Scott Loveland helps you face the issue head-on by explaining novel ways to force failures and then test the software's ability to recover. Having spent his career with IBM testing z/OS and its predecessors, MVS and OS/390, and most recently Linux, Scott reveals tools and techniques for testing the recoverability of industrial-strength software, proven in the trenches of the IBM mainframe development lab.

  • Methods for injecting errors and monitoring recovery of large, complex systems
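
As a generic illustration of the error-injection idea (these are not Scott's z/OS tools; the names below are invented), a recoverability test can wrap a dependency so that it fails on demand and then assert that the system recovers without losing acknowledged data:

    import java.io.IOException;

    // The dependency the system under test writes through; invented for illustration.
    interface DataStore {
        void write(String record) throws IOException;
    }

    // Fault injector: delegates normally, but throws once at a chosen call number.
    class FaultInjectingDataStore implements DataStore {
        private final DataStore delegate;
        private final int failOnCall;
        private int calls = 0;

        FaultInjectingDataStore(DataStore delegate, int failOnCall) {
            this.delegate = delegate;
            this.failOnCall = failOnCall;
        }

        @Override
        public void write(String record) throws IOException {
            calls++;
            if (calls == failOnCall) {
                throw new IOException("Injected failure on call " + calls);
            }
            delegate.write(record);
        }
    }

A recoverability test wires the system under test to a FaultInjectingDataStore, triggers the failure mid-workload, and then checks two things: no acknowledged record was lost, and service resumes within the allowed downtime.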
Scott Loveland, IBM Corporation
