This article describes a new and easy "Extract Database" QA methodology that can help you create sophisticated and accurate reports during the QA testing process.
Most of my computer-related experience has been in software development; for example, during the six years prior to my current position I did heavy database development and reporting at a major health insurance company. Almost three years ago I accepted a business delivery specialist position in the QA department. Even in my current QA role I spend most of my time acting as a developer, creating new reports, utilities for QA, and so on.
At the beginning of my QA career I was assigned a project that updated the structure of our major DB2 tables and CICS screens. For assignments like this it is very difficult to compare the baseline batch to the regression run, especially when you have to run the baseline, clean the environment, and then run the regression using exactly the same record set. I decided that creating a new universal methodology would be easier than creating one extremely complicated report.
In some systems it can be a problem to run certain database reports, such as a baseline-to-regression comparison. First, we need both sets of records in the same database, which is difficult in some systems: our production batch cannot process the same records twice, so we have to purge the baseline records before we start the regression. Second, a regular QA environment usually holds millions of records and has a limited number of indexes, created for production purposes and usually ineffective for testing. At the same time, keep in mind that for testing purposes we only use a few hundred or a couple of thousand records, no more.
The main idea of our new methodology is very simple: isolate the test records from the QA environment and copy them into small separate databases that use the same database structure. Because the "Database Extract" is fast, we can repeat it several times; the number of these small databases will equal the number of test steps we plan to analyze.
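The idea can be sketched in a few lines of code. This is a minimal illustration only, using Python's sqlite3 module to stand in for both the big QA database and the small local one; the `claims` table, its columns, and the test keys are all hypothetical:

```python
import sqlite3

def extract_snapshot(qa_conn, snapshot_conn, test_keys):
    """Copy only the test records into a small database with the same structure."""
    # Re-create the table in the snapshot with the same structure as in QA.
    snapshot_conn.execute(
        "CREATE TABLE claims (claim_id INTEGER PRIMARY KEY, amount REAL)")
    placeholders = ",".join("?" * len(test_keys))
    rows = qa_conn.execute(
        f"SELECT claim_id, amount FROM claims WHERE claim_id IN ({placeholders})",
        test_keys,
    ).fetchall()
    snapshot_conn.executemany("INSERT INTO claims VALUES (?, ?)", rows)
    snapshot_conn.commit()
    return len(rows)

# Demo: many rows in the QA database, but we snapshot only the few test records.
qa = sqlite3.connect(":memory:")
qa.execute("CREATE TABLE claims (claim_id INTEGER PRIMARY KEY, amount REAL)")
qa.executemany("INSERT INTO claims VALUES (?, ?)",
               [(i, i * 1.5) for i in range(1000)])

baseline = sqlite3.connect(":memory:")  # one small database per test step
copied = extract_snapshot(qa, baseline, [5, 17, 42])
print(copied)  # → 3
```

Repeating `extract_snapshot` with a fresh target connection after each test step yields one small snapshot database per step.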
For example, our main QA database is in DB2, and we use MS ACCESS as the small local database. There are many ways to extract data from a database. Our first extract process starts with a mainframe batch job: using a list of key records from QA testing as input, a COBOL program extracts only the test records from the DB2 tables affected by the project into flat files. Next we FTP those files to the desktop. Then we create tables in MS ACCESS with a structure exactly matching the DB2 tables and load the PC files into them using the import command. In this way we can create any number of extracted tables for each test step.
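The desktop half of that pipeline — parsing the downloaded flat file and loading it into a local table that mirrors the DB2 structure — can be sketched as follows. The fixed-width record layout here is an assumption for illustration (key in columns 1-10, status in 11-18, amount in cents in 19-28), and sqlite3 again stands in for MS ACCESS:

```python
import sqlite3

# Hypothetical fixed-width layout of the flat file produced by the COBOL extract.
FLAT_FILE_LINES = [
    "0000000005ACTIVE  0000012550",
    "0000000017CLOSED  0000003000",
]

def parse_record(line):
    """Split one fixed-width record into typed fields."""
    return (
        int(line[0:10]),            # record key
        line[10:18].strip(),        # status code
        int(line[18:28]) / 100.0,   # amount, stored in cents on the mainframe
    )

# Build a local table matching the DB2 layout, then load the parsed records --
# the equivalent of the MS ACCESS import command.
local = sqlite3.connect(":memory:")
local.execute("CREATE TABLE claims (claim_id INTEGER, status TEXT, amount REAL)")
local.executemany("INSERT INTO claims VALUES (?, ?, ?)",
                  [parse_record(line) for line in FLAT_FILE_LINES])
local.commit()
print(local.execute("SELECT claim_id, amount FROM claims").fetchall())
# → [(5, 125.5), (17, 30.0)]
```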
Once we have a copy of the baseline and regression data in our small database, we can delete or update any records in the QA DB2 environment with no risk of losing important information.
Then, using SQL queries in MS ACCESS, we can create almost any report we need, including a baseline-to-regression comparison (which is impossible in our DB2 environment), because small amounts of data do not need indexes. Reports can also be produced for any test step without a time-consuming stop of the QA batch run.
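One such comparison query might look like the sketch below: with the two extracted tables side by side in one small database, a single LEFT JOIN reports every changed or missing record. The table names, columns, and sample values are hypothetical, and sqlite3 stands in for MS ACCESS:

```python
import sqlite3

# Two extracted snapshots (one per test step) loaded into one small database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE baseline (claim_id INTEGER PRIMARY KEY, amount REAL)")
db.execute("CREATE TABLE regression (claim_id INTEGER PRIMARY KEY, amount REAL)")
db.executemany("INSERT INTO baseline VALUES (?, ?)",
               [(1, 10.0), (2, 20.0), (3, 30.0)])
db.executemany("INSERT INTO regression VALUES (?, ?)",
               [(1, 10.0), (2, 25.0)])

# Report rows whose amount changed, plus rows that disappeared in regression.
diffs = db.execute("""
    SELECT b.claim_id, b.amount AS base_amount, r.amount AS regr_amount
    FROM baseline b LEFT JOIN regression r ON b.claim_id = r.claim_id
    WHERE r.claim_id IS NULL OR b.amount <> r.amount
    ORDER BY b.claim_id
""").fetchall()
print(diffs)  # → [(2, 20.0, 25.0), (3, 30.0, None)]
```

Because the snapshot tables hold only a few hundred test records, this join runs instantly without any indexes, which is exactly why the comparison that was impractical in the full QA database becomes trivial here.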
This was an explanation of a simple "Database Extract" process. In real life the process can be more complicated. For example, it might be more convenient to prepare all the major tables and reports before starting the test project steps by creating the table structures and SQL queries in an empty MS ACCESS database. Then, after completing the steps described above, you will automatically have all the major reports, which can save a large amount of time and resources.
Our "Extract Database" methodology is already simple and easy to use, but even today I can see many ways to improve it. Our example was based on DB2 and MS ACCESS, but you can use any professional database as your tool (SQL Server, SYBASE, ORACLE, IDMS, FOXPRO, etc.), which would let you load and work with more test records and create more sophisticated and reliable reports.
In our example we used batch and FTP processes because that made it easy to explain all the details of the "Extract Database" method, but I have also built this process on ODBC connectivity. That version is faster, more universal, and requires much less developer support.
I appreciate the time you spent reading my article, and please feel free to contact me if you have any questions.