Building an environment to successfully test wireless intelligent network peripherals presents an array of complex problems. The target environment integrates various SS7 protocols, a proprietary protocol, and a voice recognition subsystem, and it requires a controlled, synchronized test environment. Learn how a test automation approach gives the software engineer control over the peripheral interfaces and provides for testing the entire call flow sequence, from initiation through the resulting message traffic. Discover how this approach supports function testing as well as scalability for automated performance, load, and stress testing.
This paper is based on a recent experience implementing and testing a large new software capability, the GPC Payload Command Filter (GPCF), in a maintenance organization that had not dealt with a large change in some time. While the task was completed successfully, it was not without cost in terms of schedule slips and personal angst. The purpose of this paper is to help the verifier learn from what was done right and what was done wrong, avoiding the pitfalls and emulating the successes. Specifically, the objective is as follows:
To provide guidance on how to successfully test a large new software capability using verification processes that have specialized over time to provide extremely effective results for relatively small changes.
This presentation relates a software test lab's real-world experiences performing load testing for scalability on three Web sites. Besides methodology, it covers the tools employed, client expectations before launch, and how the findings from the testing were applied to help clients correctly scale their sites. Learn why this type of testing is the most effective way to validate design and hardware architecture, and how to identify potholes before they end up on the information superhighway.
Many companies invest heavily in test automation to verify the functionality of their complex client/server and Web applications, only to find that the anticipated cost savings and higher reliability remain out of reach. This paper is a guide to creating table-driven test automation with off-the-shelf utilities and commercially available GUI testing tools. It demonstrates the benefits of the table-driven approach and presents various engines, utilities, and documents that enhance or support this third-generation testing architecture, which I call Enterprise Test Engine Suite Technology (E-TEST).
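The core idea behind table-driven automation can be sketched briefly. The following is a minimal illustration only, not the E-TEST architecture itself; all names (the action keywords, the handler functions, the sample login table) are hypothetical:

```python
# Minimal sketch of a table-driven test engine (hypothetical names, not E-TEST).
# Each row pairs an action keyword with its arguments; the engine dispatches rows
# to handler functions, so new test cases are added as data rather than as code.

def do_enter(state, field, value):
    """Record a value in a simulated form field."""
    state[field] = value

def do_verify(state, field, expected):
    """Check that a field holds the expected value."""
    assert state.get(field) == expected, f"{field}: {state.get(field)!r} != {expected!r}"

ACTIONS = {"enter": do_enter, "verify": do_verify}

def run_table(table):
    """Execute a test table row by row; return the number of rows run."""
    state = {}
    for action, field, value in table:
        ACTIONS[action](state, field, value)
    return len(table)

# A test case expressed purely as a table:
login_test = [
    ("enter",  "username", "alice"),
    ("enter",  "password", "secret"),
    ("verify", "username", "alice"),
]
```

In a real suite the handlers would drive a GUI testing tool rather than a dictionary, but the separation holds: testers maintain tables, while a small team maintains the engine.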
The key to accelerating test automation in any project is for a well-rounded, cohesive team to emerge that can marry its business knowledge with its technical expertise. This session is an in-depth case study of the evolution of automated testing at the BNSF Railroad. From record-and-playback to database-driven robust test scripts, this session will take you through each step of the $24 billion corporation's efforts to implement test automation.
Many corporations are now using Java technologies to deliver mission-critical eBusiness applications for both the intranet and the Internet. To better understand how these applications will scale and perform, this presentation provides you with a systematic process for testing, measuring, and improving performance. Find out what you need to know to properly identify and eliminate bottlenecks and ensure optimum performance.
Large application services are very dynamic in their functionality, with some of the business rules hosted by these services changing on a daily basis. This presentation discusses one company's experience in developing a new methodology and test infrastructure for automated testing and nonstop QA monitoring of large application services with high requirements churn. Learn how this method allows you to get a handle on quality even though the application services requirements remain a moving target.
Ashish Jain and Siddhartha Dalal, Telcordia Technologies
What is the real value of a computing performance improvement? What is the real cost of computing performance degradation? This paper describes an approach used at The Boeing Company to answer these questions. It discusses the challenges of presenting technical analyses in "dollars and cents, bottom line" terminology and presents sample visual formats for communicating computing performance information clearly, completely, and concisely.
All projects involve the three P's: people, process, and product. People are everyone who influences the project. Process is the set of steps taken to produce and maintain software. Product is the final outcome of the project. To keep these three in harmony, you must observe who is trying to do what to deliver what. Usually, two of the three P's are mandated, and the third one is chosen appropriately. Although this is common sense, it is not common practice. Dwayne Phillips discusses the issues and challenges that affect us all on every project. Learn about the ideas and questions to consider to help you work through these issues.
Estimating productivity (e.g., lines of source code developed per hour) and quality (e.g., code defect rates) is difficult on large software projects that involve several companies or sites, emphasize reuse of Commercial-Off-The-Shelf (COTS) components or adaptation of legacy code, and require open architectures. Using actual metrics from such software development projects, this paper illustrates problems encountered and lessons learned when measuring productivity and quality. These include: how to count different types of code; the effects of lengthy development times on productivity and quality; variability between estimates obtained from different models; and tracking and reporting productivity and quality metrics for projects based on incremental or evolutionary development.
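One widely used way to count different types of code, as the problem above requires, is to weight each category into "equivalent" new lines so that reused or adapted code is not credited the same as newly written code. The sketch below illustrates the arithmetic only; the weights are hypothetical examples, not values from this paper:

```python
# Illustrative sketch: collapse per-category line counts into equivalent new
# SLOC, then compute productivity. The category weights are hypothetical
# placeholders, not calibrated values from any specific project or model.

WEIGHTS = {"new": 1.0, "modified": 0.5, "reused": 0.1}

def equivalent_sloc(counts):
    """Weight per-category line counts into equivalent new SLOC."""
    return sum(WEIGHTS[kind] * lines for kind, lines in counts.items())

def productivity(counts, hours):
    """Equivalent SLOC developed per labor hour."""
    return equivalent_sloc(counts) / hours

project = {"new": 10_000, "modified": 4_000, "reused": 50_000}
# equivalent_sloc(project) -> 10000*1.0 + 4000*0.5 + 50000*0.1 = 17000.0
```

The choice of weights drives the result: crediting the 50,000 reused lines at full weight would triple the apparent productivity, which is exactly the kind of counting pitfall the paper examines.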