Commonsense CM Strategies to Meet Good Quality Requirements

[article]

In his CM: the Next Generation series, Joe Farah gives us a glimpse into the trends that CM experts will need to tackle and master based upon industry trends and future technology challenges.

Now, if you happen to have an integrated tool suite that can easily tell you where you are in the project, which requirements have already been fully or partially addressed, and so on, it may give you some leverage in telling the customer to wait for the next release for their change in requirements.  This is especially the case if you have a rather short release cycle.  An iterative agile process may proceed differently, maintaining the flexibility to respond to customer feedback as development proceeds.  However, this really just means dealing with smaller requirements trees in each iteration. 

I would caution against dealing with one requirements tree per iteration, however, as much of the advance thinking is accomplished by taking a look at a larger set of requirements and, after letting them soak into the brain(s) sufficiently, coming up with an architecture that will support that larger set.  If the soak time is insufficient, or the larger set is unknown, it's difficult to establish a good architecture and you may just end up with a patchwork design.

Whatever the case, if both customer and product team can easily navigate the requirements and progress, both relationships and risk management will benefit.   Make sure that you have tools and processes in place to adequately support requirements management.  It is the most critical part of your product development, culminating in the marching orders. 

Traceability to Test Cases and to Test Results
Test cases are used to verify the product deliverables against the requirements.  When the requirements are customer requirements or product requirements, the test cases are referred to as black box test cases; that is, test cases that verify the requirements without regard to how the product was designed.  Black box test cases can, and should, be produced directly from the product specification as indicated by the product requirements. 

When the requirements are system/design requirements, the test cases are referred to as white box test cases, because they are testing the design by looking inside the product.  Typically white box test cases will include testing of all internal APIs, message sequences, etc.

A CM/ALM tool must be able to track test cases back to their requirements.  Ideally, you should be able to click anywhere on the requirements tree and ask which test cases are required to verify that part of the tree.  Even better, you should be able to identify which requirements are missing test cases.  This doesn't seem like too onerous a capability for a CM/ALM tool, until you realize that the requirements tree itself is under continual revision/change control, as is the set of test cases.  So these queries need to be context dependent.
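To make the coverage query concrete, here is a minimal sketch in Python.  The data model (requirement IDs, parent links, and test case traceability links) is entirely hypothetical and not the schema of any particular CM/ALM tool; a real tool would also scope the query to a specific revision context.

```python
# Hypothetical requirements tree: child ID -> parent ID (None = root).
requirements = {
    "REQ-1": None,
    "REQ-1.1": "REQ-1",
    "REQ-1.2": "REQ-1",
    "REQ-2": None,
}

# Hypothetical traceability links: test case -> requirements it verifies.
test_cases = {
    "TC-100": ["REQ-1.1"],
    "TC-101": ["REQ-1.1", "REQ-1.2"],
}

def uncovered(requirements, test_cases):
    """Leaf requirements that no test case is linked to."""
    parents = {p for p in requirements.values() if p is not None}
    leaves = set(requirements) - parents
    covered = {req for reqs in test_cases.values() for req in reqs}
    return sorted(leaves - covered)

print(uncovered(requirements, test_cases))  # -> ['REQ-2']
```

The same link table, walked in the other direction, answers the "click anywhere on the tree" query: collect the subtree under the selected node and return every test case linked to a requirement in it.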

Going one step further, if the CM/ALM tool is to help with the functional configuration audit, the results of running the test cases against a particular set of deliverables (i.e., a build) need to be tracked as well.  Ideally, the tool should allow you to identify which requirements failed verification, based on the set of failed test cases from the test run.  It should also be able to distinguish between test cases that have passed and those that have not been run.
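The rollup from test results to requirement status can be sketched as follows.  Again, the link table, result states, and build-results record are illustrative assumptions, not any real tool's API: one failing test case fails the requirement, and an unrun test case leaves it unverified rather than verified.

```python
# Hypothetical links: test case -> requirements it verifies.
links = {
    "TC-100": ["REQ-1.1"],
    "TC-101": ["REQ-1.2"],
    "TC-102": ["REQ-2"],
}

# Results recorded against one build; a test case absent here was not run.
build_results = {"TC-100": "pass", "TC-101": "fail"}

def requirement_status(links, results):
    status = {}
    for tc, reqs in links.items():
        outcome = results.get(tc, "not-run")
        for req in reqs:
            if outcome == "fail":
                # Any failing test case fails the whole requirement.
                status[req] = "failed"
            elif outcome == "not-run" and status.get(req) != "failed":
                status[req] = "unverified"
            else:
                status.setdefault(req, "verified")
    return status

print(requirement_status(links, build_results))
# -> {'REQ-1.1': 'verified', 'REQ-1.2': 'failed', 'REQ-2': 'unverified'}
```

A functional configuration audit would then amount to checking that every requirement in the build's context reads "verified".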

More advanced tools will allow you to ask questions such as:  Which test cases have failed in some builds, but subsequently passed?  What is the history of success/failure of a particular test case and/or requirement across the history of builds?
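The first of those history queries can be sketched with a simple scan over build results in build order.  The flat (build, test case, result) history format is a hypothetical simplification of whatever a real tool stores:

```python
# Hypothetical history of results, listed in build order.
history = [
    ("B1", "TC-100", "fail"),
    ("B1", "TC-101", "pass"),
    ("B2", "TC-100", "pass"),
    ("B2", "TC-101", "fail"),
]

def failed_then_passed(history):
    """Test cases that failed in some build but passed in a later one."""
    seen_fail = set()
    recovered = set()
    for _build, tc, result in history:
        if result == "fail":
            seen_fail.add(tc)
            recovered.discard(tc)  # a later failure reopens the test case
        elif result == "pass" and tc in seen_fail:
            recovered.add(tc)
    return sorted(recovered)

print(failed_then_passed(history))  # -> ['TC-100']
```

Joining the same history against the requirement links from the previous sketches gives the per-requirement success/failure history across builds.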

With change control integrated with requirements management, it should be relatively straightforward to put together incremental test suites that run tests using only those test cases that correspond to new or changed requirements.  This is a useful capability for initially assessing new functionality introduced into a nightly or weekly build.
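Selecting such an incremental suite is a straightforward join between the traceability links and the set of new or changed requirements.  In practice that set would come from change control; here it is simply a hypothetical input:

```python
# Hypothetical links: test case -> requirements it verifies.
links = {
    "TC-100": ["REQ-1.1"],
    "TC-101": ["REQ-1.2"],
    "TC-102": ["REQ-2"],
}

def incremental_suite(links, changed_requirements):
    """Only the test cases touching a new or changed requirement."""
    changed = set(changed_requirements)
    return sorted(tc for tc, reqs in links.items() if changed & set(reqs))

print(incremental_suite(links, {"REQ-1.2"}))  # -> ['TC-101']
```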

The ability to manage test cases and test results effectively, and to tie them to requirements, will result in higher-quality requirements.  The feedback loop will help to ensure testability and will uncover holes and ambiguities in the requirements.

About the author

Joe Farah

Joe Farah is the President and CEO of Neuma Technology and is a regular contributor to the CM Journal. Prior to co-founding Neuma in 1990 and directing the development of CM+, Joe was Director of Software Architecture and Technology at Mitel, and in the 1970s a Development Manager at Nortel (Bell-Northern Research) where he developed the Program Library System (PLS) still heavily in use by Nortel's largest projects. A software developer since the late 1960s, Joe holds a B.A.Sc. degree in Engineering Science from the University of Toronto. You can contact Joe at farah@neuma.com
