"Extract Database" QA process

Member Submitted
Summary:

The following article describes a new and easy "Extract Database" QA methodology that can help you create sophisticated and accurate reports during the QA testing process.

Most of my computer-related experience has been in software development; for example, during the six years prior to my current position, I did heavy database development and reporting at a major health insurance company. Almost three years ago I accepted a business delivery specialist position in the QA department, where I still spend most of my time acting as a developer, creating new reports, QA utilities, and so on.

At the beginning of my QA career I was assigned a project that updated the structure of our major DB2 tables and CICS screens. For assignments like this it is very difficult to compare the baseline batch to the regression batch, especially when you have to run the baseline, clean the environment, and then run the regression using exactly the same record set. I decided that creating a new, universal methodology would be easier than creating one extremely complicated report.

Introduction
In some systems it can be a problem to run certain database reports, such as a comparison of the baseline to the regression. First, we need both sets of records in the same database, which is difficult in some systems: our production batch cannot process the same records twice, so we have to purge the baseline records before we start the regression. Second, a regular QA environment usually holds millions of records and has a limited number of indexes, created for production purposes and usually ineffective for testing. At the same time, keep in mind that for testing purposes we use only a few hundred or a couple of thousand records, no more.

Body
The main idea of our new methodology is very simple: isolate the test records from the QA environment and copy them into small, separate databases that use the same database structure. Because the “Database Extract” is quick, we can repeat it several times; the number of these small databases matches the number of test steps we are planning to analyze.

For example: our main QA database is in DB2, and as the small local database we use MS ACCESS. There are many ways to extract data from a database. Our extract process starts with a mainframe batch job: using a list of key records from QA testing as input to a COBOL program, we extract only the test records from the DB2 tables affected by the project into flat files. Next we FTP those files down and create flat files on the desktop. We then create tables in the MS ACCESS database with exactly the same structure as the DB2 tables and load the PC files into them using the import command. In this way we can create any number of extracted tables, one set for each test step.
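As a rough sketch of what the MS ACCESS side might look like, assume a hypothetical claims table with columns CLAIM_ID, MEMBER_ID, CLAIM_STATUS, PAID_AMOUNT, and PROCESS_DATE (your actual DB2 layout will differ, and the import wizard can also build these tables for you). Creating matching baseline and regression tables by hand could look like this, with each statement run as its own query in Access:

    CREATE TABLE CLAIM_BASELINE (
        CLAIM_ID      TEXT(12) NOT NULL CONSTRAINT PK_CLAIM_BL PRIMARY KEY,
        MEMBER_ID     TEXT(10),
        CLAIM_STATUS  TEXT(2),
        PAID_AMOUNT   CURRENCY,
        PROCESS_DATE  DATETIME
    );

    CREATE TABLE CLAIM_REGRESSION (
        CLAIM_ID      TEXT(12) NOT NULL CONSTRAINT PK_CLAIM_RG PRIMARY KEY,
        MEMBER_ID     TEXT(10),
        CLAIM_STATUS  TEXT(2),
        PAID_AMOUNT   CURRENCY,
        PROCESS_DATE  DATETIME
    );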

Once we have a copy of the baseline and regression data in our small database, we can delete or update any records in the QA DB2 environment with no risk of losing important information.

Then, using SQL queries in MS ACCESS, we can create almost any report we need, including a comparison of the baseline to the regression (which is impractical in our DB2 environment), because small amounts of data do not need indexes. Reports can also be produced for any test step without a time-consuming stop of the QA batch run.
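To make the comparison idea concrete, here are two query sketches against the hypothetical CLAIM_BASELINE and CLAIM_REGRESSION extract tables from the earlier example; the table and column names are illustrative only.

    -- Baseline records that no longer appear in the regression extract.
    SELECT B.CLAIM_ID, B.CLAIM_STATUS, B.PAID_AMOUNT
    FROM CLAIM_BASELINE AS B
    LEFT JOIN CLAIM_REGRESSION AS R ON B.CLAIM_ID = R.CLAIM_ID
    WHERE R.CLAIM_ID IS NULL;

    -- Records present in both extracts but processed differently.
    SELECT B.CLAIM_ID,
           B.CLAIM_STATUS AS BASELINE_STATUS,
           R.CLAIM_STATUS AS REGRESSION_STATUS,
           B.PAID_AMOUNT  AS BASELINE_PAID,
           R.PAID_AMOUNT  AS REGRESSION_PAID
    FROM CLAIM_BASELINE AS B
    INNER JOIN CLAIM_REGRESSION AS R ON B.CLAIM_ID = R.CLAIM_ID
    WHERE B.CLAIM_STATUS <> R.CLAIM_STATUS
       OR B.PAID_AMOUNT  <> R.PAID_AMOUNT;

Each query is saved and run separately in MS ACCESS, and its SQL view does not accept the "--" comments, so remove them when pasting.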

This was an explanation of a simple “Database Extract” process. In real life the process can be more complicated. For example, it might be more convenient to prepare
