A Common Tool for an Uncommon Problem

created. The issue was editing the relationships between members and features, or between features and test cases. Two problems were identified: floating test cases and floating features. A floating test case was one related to a feature that no longer existed, or one that had lost the relationship altogether. A floating feature was one with no test cases or, even worse, with test cases that no member was using. We added code in the interface to check for these instances. (We could have enforced this in the database.)
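As a rough illustration of those checks, here is a minimal sketch assuming a hypothetical SQLite schema: member(id, name), feature(id, name), test_case(id, feature_id, title), and a member_feature(member_id, feature_id) link table. None of these names come from the actual tool.

```python
import sqlite3

def find_floating_records(conn: sqlite3.Connection):
    """Return (floating_test_cases, floating_features).

    Assumed, illustrative schema: member(id, name), feature(id, name),
    test_case(id, feature_id, title), member_feature(member_id, feature_id).
    """
    # Test cases whose feature link is missing or points at a deleted feature.
    floating_test_cases = conn.execute(
        """
        SELECT tc.id, tc.title
        FROM test_case AS tc
        LEFT JOIN feature AS f ON f.id = tc.feature_id
        WHERE tc.feature_id IS NULL OR f.id IS NULL
        """
    ).fetchall()

    # Features with no test cases at all, or not related to any member.
    floating_features = conn.execute(
        """
        SELECT f.id, f.name
        FROM feature AS f
        WHERE NOT EXISTS (SELECT 1 FROM test_case tc WHERE tc.feature_id = f.id)
           OR NOT EXISTS (SELECT 1 FROM member_feature mf WHERE mf.feature_id = f.id)
        """
    ).fetchall()

    return floating_test_cases, floating_features
```

Enforcing the same rules in the database itself would amount to a foreign-key constraint on test_case.feature_id and a similar rule on the link table.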

The GUI has a screen for adding and editing each of the members, features, and test cases. The editing screen for each allows next, previous, move-first, and move-last navigation. The opening screen lets the operator pick the database of choice. As the test cases and features were being added, we saw the need for a filter, so we added a filter screen where you can narrow the data by member or by feature. The active filter drives how the data appears on the different screens.
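To sketch how the filter might drive the queries behind those screens, here is one possible shape against the same hypothetical schema (the function and parameter names are illustrative, not from the original tool):

```python
import sqlite3
from typing import Optional

def fetch_test_cases(conn: sqlite3.Connection,
                     member_id: Optional[int] = None,
                     feature_id: Optional[int] = None):
    """Fetch test cases, narrowed by whichever filter is active."""
    sql = "SELECT DISTINCT tc.id, tc.title FROM test_case AS tc"
    clauses, params = [], []
    if member_id is not None:
        # Restrict to features the member is related to via the link table.
        sql += " JOIN member_feature AS mf ON mf.feature_id = tc.feature_id"
        clauses.append("mf.member_id = ?")
        params.append(member_id)
    if feature_id is not None:
        clauses.append("tc.feature_id = ?")
        params.append(feature_id)
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()
```

Each screen can then render whatever this returns, so changing the filter in one place changes the view everywhere.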

Future Enhancements
The first enhancement would be to either integrate this with the bug-tracking database or create a table to represent each test pass through the test cases. Being statistically inclined, I already have three additional reports in mind. The first would report encountered bugs against feature area. Are certain feature areas predisposed to having large numbers of bugs? Does that represent new technology or a highly critical area? The opposite case is just as telling: if a feature has been stable for the last "x" members, should the QA group adjust its focus?
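A sketch of that first report, assuming the bug-tracker integration lands its data in a hypothetical bug(id, feature_id, member_id) table (illustrative names only):

```python
import sqlite3

def bugs_by_feature_area(conn: sqlite3.Connection):
    """Count encountered bugs per feature area, bug-prone areas first.

    Assumes a hypothetical bug(id, feature_id, member_id) table populated
    from the bug-tracking database.
    """
    return conn.execute(
        """
        SELECT f.name AS feature_area, COUNT(b.id) AS bug_count
        FROM feature AS f
        LEFT JOIN bug AS b ON b.feature_id = f.id
        GROUP BY f.id, f.name
        ORDER BY bug_count DESC
        """
    ).fetchall()
```

The top of the list suggests areas that may deserve more QA attention; features at the bottom with a long stable history are candidates for a lighter touch.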

The next step would be to track whether a member has bugs in an area where other members don't. That kind of discovery could point to integration problems. Are we working with the same code base and version? Who is the configuration and/or compile manager, and can this information lead to a quicker turnaround time? Is the test case clear and understandable? Does this reflect tester error or interpretation? Maybe it's a training issue.
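One way a report like that might be expressed, again against the hypothetical bug table:

```python
import sqlite3

def member_only_bugs(conn: sqlite3.Connection):
    """Find (member, feature) pairs where that member logged bugs in a
    feature area and no other member did: a possible integration smell."""
    return conn.execute(
        """
        SELECT m.name AS member, f.name AS feature, COUNT(b.id) AS bug_count
        FROM bug AS b
        JOIN member AS m ON m.id = b.member_id
        JOIN feature AS f ON f.id = b.feature_id
        GROUP BY b.member_id, b.feature_id, m.name, f.name
        HAVING NOT EXISTS (
            SELECT 1 FROM bug AS other
            WHERE other.feature_id = b.feature_id
              AND other.member_id <> b.member_id
        )
        """
    ).fetchall()
```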

The last would be a selfish report. I would like to see a report of test cases by status: completed, blocked, and open. We could also include the baseline or estimate being used as a guide. I am sure that a formula of some kind could turn this information into a testing dashboard that is very nearly live.
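As a sketch, the dashboard numbers could come from the per-test-pass table suggested above, here assumed to be a hypothetical test_run(test_case_id, status) table:

```python
import sqlite3

def test_pass_dashboard(conn: sqlite3.Connection, baseline_total: int):
    """Summarize a test pass as completed / blocked / open counts.

    Assumes a hypothetical test_run(test_case_id, status) table where
    status is 'completed', 'blocked', or 'open'; baseline_total is the
    planned number of test case executions (the baseline or estimate).
    """
    counts = dict(conn.execute(
        "SELECT status, COUNT(*) FROM test_run GROUP BY status"
    ).fetchall())
    completed = counts.get("completed", 0)
    return {
        "completed": completed,
        "blocked": counts.get("blocked", 0),
        "open": counts.get("open", 0),
        # One candidate formula for the near-live dashboard figure.
        "percent_complete": (round(100 * completed / baseline_total, 1)
                             if baseline_total else None),
    }
```

Rerun against the live database, this is about as close to a real-time dashboard as the data allows.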
