Conference Presentations

Service-Oriented Architecture - Exposed

Service-Oriented Architecture (SOA), incorporating methods for Web services to communicate dynamically, promises to significantly improve organizational operating efficiency, change the way companies conduct business, and even alter the competitive landscape. However, SOA is a strategy rather than an objective, and, like any strategy, it is of no value unless it is implemented. With illustrations from companies that are using SOA today to transform their organizations, Sharon Fay shares current practices for exposing Web services and XML to internal development teams, outsourced development, external trading partners, and customers. Learn why reuse is a key method for supporting integration of SOA implementations and how it is being accomplished. Take away a set of metrics that you can use to measure the level of SOA adoption, development productivity gains, and organizational agility.

Sharon Fay, Flashline, Inc.
eXtreme Architecture and Design for Test

eXtreme programming emphasizes test-first coding: you write the tests before writing the implementation code. You can apply the same approach to design when developing a complex system, including an architecture that supports testing. To be successful, systems developed with agile methods must support a high level of testability and test automation. For large distributed systems, more sophisticated testing is needed to help determine which components may be contributing to failures. For such complex systems, you should architect the system for testing rather than add testing functionality as an afterthought. Ken Pugh presents a framework that employs polymorphic-style internal and external interface patterns to ease the work of testing and debugging. He also covers adding test-only functionality, test-only outputs, and test-only logging to interfaces.
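
Ken Pugh's own framework is not reproduced here, but a minimal sketch of the general idea, with hypothetical names, might look like this in Python: a production implementation and a test-only implementation share one interface, and the test variant adds the test-only logging the abstract mentions.

    from abc import ABC, abstractmethod

    class PaymentGateway(ABC):
        """One interface used by both production and test code."""
        @abstractmethod
        def charge(self, account_id: str, amount: float) -> bool:
            ...

    class LivePaymentGateway(PaymentGateway):
        def charge(self, account_id: str, amount: float) -> bool:
            # Production code would call the real payment service here.
            raise NotImplementedError("requires the real payment service")

    class TestPaymentGateway(PaymentGateway):
        """Test-only implementation: records every call and always succeeds."""
        def __init__(self) -> None:
            self.log = []  # test-only logging

        def charge(self, account_id: str, amount: float) -> bool:
            self.log.append(("charge", account_id, amount))
            return True

    # Because the system under test depends only on the interface, a test
    # can substitute TestPaymentGateway and inspect the call log afterward.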

Ken Pugh, Pugh-Killeen Associates
Patterns for Writing Effective Use Cases

Use cases are a wonderfully simple concept: document a system's functional requirements by writing down scenarios about how using it delivers value to its actors. However, writing effective use cases is more difficult than expected because you frequently must deal with difficult questions: What belongs in scope? How much detail do different people and projects need? How should external interfaces, stored data, and the like be described? You need a source of objective criteria to judge use case quality and effectiveness. Fill this critical information gap with a pattern language that provides simple, elegant, and proven solutions to common problems in use case development. Take away these use case patterns, profit from the knowledge and experience of other successful use case writers, and develop a new vocabulary for describing the properties of quality use cases.

  • The "signs of quality" and properties of a good use case
Steve Adolph, WSA Consulting Inc.
Leverage Earned Value Management with Function Point Analysis

In the Earned Value Management (EVM) approach, work is "earned" as it is performed, on the same basis it was planned, covering both the original plan and agreed-upon changes. Today, more and more software projects are using this approach. Function Point Analysis has been shown to be a reliable method for measuring the size of computer software based on detailed requirements and specifications. Function points can be leveraged throughout the EVM process to establish cost and schedule baselines, control project scope over the lifecycle, and quantitatively assess percent complete. Ian Brown delves into the concepts of EVM as applied to software development and the key conditions necessary to employ this management approach profitably. Learn how companies are using function point analysis to improve this management approach.
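
As a rough, made-up illustration of the arithmetic involved (not an example from the talk), function points can serve as the unit of planned and earned work in the standard EVM indices:

    # Illustrative only: function points as the unit of work in EVM.
    planned_fp   = 400      # total function points in the baseline
    earned_fp    = 150      # function points delivered and accepted so far
    scheduled_fp = 200      # function points planned to be done by now
    budget       = 800_000  # budget for the baseline scope ($)
    actual_cost  = 350_000  # actual cost to date ($)

    percent_complete = earned_fp / planned_fp               # 37.5%
    earned_value     = percent_complete * budget             # EV = $300,000
    planned_value    = scheduled_fp / planned_fp * budget    # PV = $400,000

    cpi = earned_value / actual_cost     # cost performance index, about 0.86
    spi = earned_value / planned_value   # schedule performance index, 0.75

    print(f"% complete: {percent_complete:.1%}, CPI: {cpi:.2f}, SPI: {spi:.2f}")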

  • Earned Value Management applied to software development projects
Ian Brown, Booz Allen Hamilton
Software Test Automation Spring 2003: Mission Made Possible: A Lightweight Test Automation Experience

Using a challenging client engagement as a case study, Rex Black shows you how he and a team of test engineers created an integrated, automated unit, component, and integration testing harness, along with a lightweight process for using it. The test harness supported both static and dynamic testing of a product that ran on multiple platforms. The test process allowed system development teams spread across three continents to test their own units before checking them into the code repository, while the capture of those tests provided automated integration testing and component regression testing going forward. He also explains the tools available to build such a testing harness and why his team chose the ones they did.
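
The harness itself is not shown here; as a purely illustrative sketch of the check-in discipline described, a local gate might run the unit suite and refuse to commit on failure (the test directory and the CVS command are assumptions):

    # Purely illustrative; the real harness was far more elaborate.
    import subprocess
    import sys
    import unittest

    def run_unit_suite() -> bool:
        """Discover and run the local unit tests; True means all passed."""
        suite = unittest.defaultTestLoader.discover("tests")
        result = unittest.TextTestRunner(verbosity=1).run(suite)
        return result.wasSuccessful()

    def check_in(message: str) -> None:
        """Commit to the repository only if the unit suite is green."""
        if not run_unit_suite():
            sys.exit("Unit tests failed; check-in blocked.")
        subprocess.run(["cvs", "commit", "-m", message], check=True)

    if __name__ == "__main__":
        check_in(sys.argv[1] if len(sys.argv) > 1 else "green unit suite")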

  • Examine the benefits and challenges of implementing an integrated, automated component and integration testing process in a Java/EJB development environment
Rex Black, Rex Black Consulting Services, Inc.
Home-Brewed Test Automation: Approaches from Extreme Programming Projects

Projects that use eXtreme programming (XP) often do not use commercial GUI test tools, finding it more useful to build their own support for test automation. This session explains the strategies they have used, which carry over to any project where developers take responsibility for building support for automated testing. The XP community has already made an impact on the tools and practices for unit testing in the wider development community. The instructor reviews the potential impact on customer-perspective testing.
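
As a hedged illustration of the "test API" idea covered in this session (the application and its operations are invented for the example), tests drive the application through a thin programmatic layer rather than through its GUI:

    # Invented example: a thin "test API" lets tests drive the application
    # through code instead of through its GUI.

    class TinyInvoiceApp:
        """Stand-in for the real application (illustrative only)."""
        def __init__(self):
            self._invoices = {}
            self._next_id = 1

        def create_invoice(self, customer, amount):
            invoice_id = self._next_id
            self._next_id += 1
            self._invoices[invoice_id] = {"customer": customer, "total": amount}
            return invoice_id

        def invoice_total(self, invoice_id):
            return self._invoices[invoice_id]["total"]

    class InvoiceTestAPI:
        """The same operations a user performs via the GUI, as plain calls."""
        def __init__(self, app):
            self.app = app

        def create_invoice(self, customer, amount):
            return self.app.create_invoice(customer, amount)

        def invoice_total(self, invoice_id):
            return self.app.invoice_total(invoice_id)

    def test_invoice_total():
        api = InvoiceTestAPI(TinyInvoiceApp())
        invoice_id = api.create_invoice("ACME", 99.50)
        assert api.invoice_total(invoice_id) == 99.50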

  • Share experiences in building in-house GUI test tools
  • How and when to build and use test APIs
  • Open-source tools to support these approaches
Bret Pettichord, Pettichord Consulting
Smaller-Scale Web Sites Need Performance Testing Too!

Even a smaller-scale Web site requires careful planning and execution of performance tests. Making the critical decisions in a timely manner and identifying the performance goals are still prerequisites to a successful test. However, smaller sites don't necessarily have the resources required for large-scale testing, so compromises have to be made, which makes good test planning all the more important. The instructor describes the testing of a small site looking to grow, along with the successes and pitfalls of achieving reasonable goals.
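
A bare-bones load generator of the kind a small site might start with is sketched below (the URL and numbers are placeholders, and real performance tools do much more):

    # Illustrative, bare-bones load test: a handful of concurrent users
    # hitting one URL and reporting response times.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://www.example.com/"   # placeholder
    USERS = 10                        # concurrent virtual users
    REQUESTS_PER_USER = 20

    def one_user(_):
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.time()
            with urllib.request.urlopen(URL, timeout=10) as response:
                response.read()
            timings.append(time.time() - start)
        return timings

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        all_timings = [t for user in pool.map(one_user, range(USERS)) for t in user]

    all_timings.sort()
    print(f"requests: {len(all_timings)}")
    print(f"median:   {all_timings[len(all_timings) // 2]:.3f}s")
    print(f"95th pct: {all_timings[int(len(all_timings) * 0.95)]:.3f}s")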

  • Define the test objectives; what's reasonable?
  • Plan the test, then make effective use of tools, choices, and tradeoffs
  • Apply and understand the results
Dale Perry, Software Quality Engineering
Fault Injection to Stress Test Windows Applications

Testing an application's robustness and tolerance for failures in its natural environment can be difficult or impossible. Developers and testers buy tool suites to simulate load, write programs that fill memory, and create large files on disk, all to determine the behavior of their application under test in a hostile and unpredictable environment. Herbert Thompson describes and demonstrates new, cutting-edge methods for simulating stress that are more efficient and reliable than current industry practices. Using Windows Media Player and Winamp as examples, he demonstrates how new methods of fault injection can be used to simulate stress on Windows applications.
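
For contrast with the runtime fault injection the talk covers, here is a deliberately blunt sketch of the conventional approach the abstract mentions, filling memory and disk and then exercising the application by hand (sizes are arbitrary):

    # The blunt, conventional approach: exhaust resources system-wide and
    # hope the application under test is the thing that notices. Runtime
    # fault injection instead makes specific calls fail for a specific
    # process. Illustrative only.
    import os
    import tempfile

    def hog_memory(megabytes: int) -> list:
        """Hold roughly the requested amount of memory."""
        return [bytearray(1024 * 1024) for _ in range(megabytes)]

    def hog_disk(megabytes: int) -> str:
        """Create a large scratch file; returns its path for cleanup."""
        fd, path = tempfile.mkstemp(prefix="stress_")
        with os.fdopen(fd, "wb") as f:
            for _ in range(megabytes):
                f.write(b"\0" * 1024 * 1024)
        return path

    if __name__ == "__main__":
        hoard = hog_memory(256)      # keep a reference so it isn't freed
        scratch = hog_disk(512)
        input("Resources held; exercise the application, then press Enter.")
        os.remove(scratch)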

  • Runtime fault injection as a testing and assessment tool
  • Cutting-edge stress-testing techniques
  • An in-depth case study on runtime fault injection
Herbert Thompson, Security Innovation
Automated API Testing: A Model-Based Approach

API testing is difficult, even with automated support. However, with traditional automated testing solutions, the cost to create and maintain a test suite can be more than the savings realized from automated test execution. Creating a model of the API under test and generating the test scripts automatically from the model makes test automation more cost-effective. Kirk Sayre describes how to create models of APIs; how to take the expected use of the API under test into account with Markov chains; how to augment the models with the information needed to generate automated test scripts; and how to use and interpret test results. You'll see concrete examples of automated model-based testing of APIs written in Java, PHP, and C.
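
A minimal sketch of a Markov chain usage model, with an invented file-handling API as the subject: states are API calls, transition probabilities encode expected use, and each random walk through the model becomes one generated test case. In practice, each generated sequence would then be mapped to concrete API calls and expected results.

    # Hypothetical usage model: states are API calls, and transition
    # probabilities reflect how the API is expected to be used.
    import random

    USAGE_MODEL = {
        "start": [("open", 1.0)],
        "open":  [("read", 0.6), ("write", 0.3), ("close", 0.1)],
        "read":  [("read", 0.5), ("write", 0.2), ("close", 0.3)],
        "write": [("read", 0.3), ("write", 0.4), ("close", 0.3)],
        "close": [("exit", 1.0)],
    }

    def generate_test_sequence(model, start="start", end="exit"):
        """Random walk through the usage model; the path is one test case."""
        state, path = start, []
        while state != end:
            next_states, weights = zip(*model[state])
            state = random.choices(next_states, weights=weights)[0]
            if state != end:
                path.append(state)
        return path

    if __name__ == "__main__":
        random.seed(0)
        for i in range(3):
            print(f"test case {i + 1}:",
                  " -> ".join(generate_test_sequence(USAGE_MODEL)))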

  • Create a model of an API for use in model-based testing
  • The basics of testing using Markov chain usage models
Kirk Sayre, The University of Tennessee
Getting a Grip on Exploratory Testing

Many testers have heard about exploratory testing, and everyone does some testing without a script or a detailed plan. But how is exploratory testing different from ad-hoc testing? In this interactive session, James Lyndsay demonstrates the approaches to exploratory testing he often uses at work. With specially built exercises, he explains his thought process as he explores the application. He analyzes applications by looking at their inputs and outputs and by observing their behaviors and states. He employs both cultural and empirical models to establish a basis for observing whether a test succeeds or fails. Through this process, you will gain insights about how to improve your own exploratory style.

  • Using active play to parse and understand a sample application
  • Analysis of inputs, outputs, and their linkage to enhance explorations
James Lyndsay, Workroom Productions
