systems are more interconnected these days. You want this PDA to work with that office software, this communications software to use a particular set of protocols, and so on. The result is an endless wish list of interface requirements, ordered by real market demand.
And the reasons go on.
So what makes a good requirement? Sometimes it's easy. For example, a new C compiler might have a requirement that it compile all existing GNU C programs. Or a new communications system must be able to communicate with existing Bell trunk lines. An airplane's navigation system must be able to work with existing Air Traffic Control systems.
These easy requirements typically owe their ease of specification to the fact that an accepted standard is already in place with which the product must comply. Standards are really a type of requirement. A new product design may choose to adopt compliance with a specific standard as a requirement, or not. The standards themselves often go through years of multi-corporate evolution.
More generally, if you want a quality requirement, you really need to look at two things: (1) Can it be clearly and completely expressed? (2) Is it testable?
If you can take your requirements and write test cases for them, you're more than halfway there. In fact, one of the benefits of standards is that they often have full test suites associated with them. And even if they don't, plugging them into the real world provides a very good test bed, when that can be done safely! This is closely related to the problem reporting axiom: most of the work in fixing a problem is in being able to reproduce it. When you can reproduce a problem, you have both a clear specification of the problem and a means of testing the fix. It's the same with requirements: express one and write a test case for it, and you've usually done most of the work.
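To make that concrete, here is a minimal sketch of a requirement expressed as an executable test case, using the compiler example from earlier. It assumes a hypothetical compiler binary and corpus directory; the paths and names are placeholders invented for the illustration, not any real product's layout.

```python
import subprocess
from pathlib import Path

# Hypothetical paths: the compiler under test and a corpus of existing
# C programs it is required to compile. Neither is a real product path.
COMPILER = "./bin/newcc"
CORPUS = Path("tests/c_corpus")

def test_compiles_entire_corpus():
    """The requirement as a test: every corpus program compiles cleanly."""
    out_dir = Path("build/corpus_objects")
    out_dir.mkdir(parents=True, exist_ok=True)
    failures = []
    for source in sorted(CORPUS.glob("**/*.c")):
        result = subprocess.run(
            [COMPILER, "-c", str(source), "-o", str(out_dir / (source.stem + ".o"))],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            failures.append(source.name)
    # The requirement is met only if nothing failed to compile.
    assert not failures, f"failed to compile: {failures}"
```

Every run of a test like this re-verifies the requirement, which is exactly what makes a testable requirement so much cheaper to live with than an untestable one.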
How Can We Help
So how can the CM/ALM profession help with producing quality requirements? By providing tools that manage change to requirements, and that manage test case traceability.
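As a sketch of what such a tool manages underneath, consider a hypothetical data model that links each requirement to the test cases that verify it, and that records a revision on every change. The class and field names here are illustrative assumptions, not the schema of any particular CM/ALM product.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    id: str
    description: str

@dataclass
class Requirement:
    id: str
    text: str
    revision: int = 1                         # bumped on every approved change
    test_cases: list[TestCase] = field(default_factory=list)

    def revise(self, new_text: str) -> None:
        """Record a change; managed change means old revisions stay queryable."""
        self.text = new_text
        self.revision += 1

def untested(requirements: list[Requirement]) -> list[Requirement]:
    """Traceability report: requirements with no verifying test case."""
    return [r for r in requirements if not r.test_cases]
```

A query like untested() is the basis of a traceability report: it flags requirements that cannot yet be verified.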
Let's take a look at the whole picture. This is what a typical requirements flow looks like.
Now generally there are two different ways of dealing with requirements. One is to call the Customer and Product requirements "Requirements," and to call the System- and Design-level requirements activities or design tasks. The input requirements from Product Management are known as the Requirements, while the things that the Design Team has authority and control over are known as activities/tasks. In this scenario (shown in the diagram), "Requirements" denotes the set of requirements placed on the Product Development team.
The other way of dealing with requirements is simply to treat requirements at different levels, based on the consumer of the requirements. So a Customer Requirements tree is allocated to the next level as a Product Requirements tree, which is allocated to the next level as a System Requirements tree, which is in turn allocated to a Design Requirements tree. Each level has a different "owner" and a different customer. The actual levels, and their names, may differ somewhat from shop to shop. But traceability of the implementation runs from level to level: each level must completely cover the requirements of the preceding level.
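To illustrate the level-to-level coverage rule, here is a small sketch that checks whether every requirement at one level has been allocated to at least one requirement at the next level down. The allocation map and the identifiers are assumptions made up for the example.

```python
# Allocation map: parent requirement id -> ids of the child-level
# requirements it was allocated to. All ids here are hypothetical.
allocation = {
    "CUST-1": ["PROD-1", "PROD-2"],
    "CUST-2": ["PROD-3"],
}

def uncovered(parent_ids: list[str], allocation: dict[str, list[str]]) -> list[str]:
    """Each level must completely cover the preceding one: report the gaps."""
    return [rid for rid in parent_ids if not allocation.get(rid)]

print(uncovered(["CUST-1", "CUST-2", "CUST-3"], allocation))  # -> ['CUST-3']
```

The same check, run at each boundary from Customer down to Design, is what full traceability amounts to in practice.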
I don't view these as two different ways of working - just two different ways of identifying requirements and design tasks. Both require full traceability. Both track the same information with respect to requirements. I prefer the former because the type of data object, and the authority exercised over Product Development Team tasks, is very different from that for Customer/Product requirements. While the Development Team may have some input and interaction with the Customer and Product Management in establishing the Customer/Product Requirements, it is there that a contractual agreement is reached.