STARWEST 2012 - Software Testing Conference

PRESENTATIONS

Testing a Business Intelligence/Data Warehouse Project

When an organization builds a data warehouse, critical business decisions are made on the basis of the data. But how do you know the data is accurate? What should you test, and how? Karen Johnson discusses how to test in the highly technical areas of data extraction, transformation, and loading. Stored procedures, triggers, and custom ETL (extract, transform, load) transactions often must be tested before the reports or dashboards from a business intelligence (BI) project can be tested.
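One common ETL test of the kind described above is row-count reconciliation: after a load, every source row should be accounted for, either loaded into the warehouse or explicitly rejected by a documented validation rule. The function and counts below are a hypothetical sketch, not taken from the talk.

```python
# Hypothetical row-count reconciliation check for an ETL load:
# verify that no source rows were silently dropped in transit.
def reconcile_counts(source_rows: int, loaded_rows: int, rejected_rows: int) -> bool:
    """Return True when every source row is either loaded or rejected."""
    return source_rows == loaded_rows + rejected_rows

# Example: 1,000 source rows -> 990 loaded, 10 rejected by validation.
print(reconcile_counts(1000, 990, 10))  # True: fully accounted for
print(reconcile_counts(1000, 985, 10))  # False: 5 rows unaccounted for
```

In practice the same comparison is usually run as paired SQL counts (or checksums) against the source system and the warehouse tables.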

Karen Johnson, Software Test Management, Inc.

Testing in the Cloud: Policy, Security, Privacy, and Culture

Many organizations are evaluating and migrating toward cloud computing solutions. In 2012, the challenges are less technological and more cultural and policy related. Steven Woodward shares the National Institute of Standards and Technology (NIST) Cloud Computing Reference Architecture that forms the foundation for many organizations’ cloud initiatives.

Steven Woodward, Cloud Perspectives

Testing Mobile Apps: Three Dilemmas Solved

The fragmentation and unpredictability of the mobile market present new challenges and risks for the business and the development team. Testers must assure application quality across multiple platforms and help deliver new products almost every day. Using his experiences implementing automated mobile testing for clients, Yoram Mizrachi analyzes three fundamental mobile testing dilemmas encountered when enterprises go mobile.

Yoram Mizrachi, Perfecto Mobile

Testing Requirements in Motion

Regg Struyk of Polarion discusses how to maximize collaboration with QR technology: integrating QR codes into your daily standup meeting and executing test runs using QR codes.

Regg Struyk, Polarion

Tests and Requirements: You Can't Have One without the Other

The practice of software development, including agile, requires a clear understanding of business needs. Misunderstanding requirements causes waste, missed schedules, and mistrust within the organization. A disagreement about whether or not an incident is a defect can arise between testers and developers when the cause is really a disagreement about the requirement itself. Ken Pugh describes how you can use acceptance tests to decrease this misunderstanding of intent.

Ken Pugh, Net Objectives

The Art of Designing Test Data

Test data generation is an important preparatory step in software testing. It calls for a tester’s creativity as much as test case design itself. Focusing on the type of testing to be performed and designing data to support it yields the greatest success in finding defects. For example, security testing largely requires negative test data to attempt to gain access to a system as a hacker would. Localization testing requires very specific test data in the areas of date, time, and currency.
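The localization point above can be made concrete: the same calendar date renders differently across regions, and boundary values such as leap days frequently expose defects. This is a minimal illustrative sketch (the format labels and chosen date are assumptions, not from the talk).

```python
from datetime import date

# Hypothetical locale-sensitive test data: one calendar date rendered in
# formats common to different regions, using a leap day as a boundary case.
def date_variants(d: date) -> dict:
    return {
        "ISO 8601": d.strftime("%Y-%m-%d"),  # e.g. 2012-02-29
        "US":       d.strftime("%m/%d/%Y"),  # e.g. 02/29/2012
        "EU":       d.strftime("%d/%m/%Y"),  # e.g. 29/02/2012
    }

leap_day = date(2012, 2, 29)  # boundary value: leap day
print(date_variants(leap_day))
```

Similar variant tables can be built for time zones, number separators, and currency symbols to drive localization test cases.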

Rajini Padmanaban, QA InfoTech

The Dangers of the Requirements Coverage Metric

When testing a system, one question that always arises is, “How much of the system have we tested?” Coverage is defined as the ratio of “what has been tested” to “what there is to test.” One of the basic coverage metrics is requirements coverage: measuring the percentage of the requirements that have been tested. Unfortunately, the requirements coverage metric comes with some serious difficulties: requirements are difficult to count; they are ideas, not physical things, and come in different formats, sizes, and quality levels.

Lee Copeland, Software Quality Engineering

The Many Flavors of Exploratory Testing

The concept of exploratory testing is evolving, and different interpretations and variations are emerging and maturing. These range from the pure and original thoughts of James Bach, later expanded to session-based exploratory testing by Jon Bach, to testing tours described by James Whittaker, to the many different ways test teams across the world have chosen to interpret exploratory testing in their own contexts.

Gitte Ottosen, Sogeti Denmark

The Metrics Minefield

In many organizations, management demands measurements to help assess the quality of software products and projects. Are those measurements backed by solid metrics? How do we make sure that our metrics are reliably measuring what they're supposed to? What skills do we need to do this job well? Measurement is the art and science of making reliable and significant observations. Michael Bolton describes some common problems and risks with software measurement, and what we can do to address them.

Michael Bolton, DevelopSense, Inc.

The Missing Integration at Best Buy: Agile, Test Management, and Test Execution

What can you do when test tools from proprietary vendors don’t seem to support your organization’s processes and open source tools are too narrowly focused? Best Buy, the world's largest electronics retailer, faced this very situation. With hundreds of agile development projects running concurrently, they needed an integrated test management and test execution tool set that would scale up easily.

Frank Cohen, PushToTest

