When we deliver software products, we need to be able to tell our customers what they're getting. Not just product documentation, but specifically, every time we deliver a new release, what problems were fixed and what the new features are. If the software is subject to periodic audits, we need to provide even more, in particular the ability to trace a requirement or change request to what was actually changed.
And we do that very well. We point to the build the customer currently has, and to the build we're planning to ship. Then we ask for a list of problems fixed, a list of new features, and any documentation available for the new features. In a few seconds we have our answer: a simple release notes document, ready for the technical writer to spruce up.
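The process described above can be sketched as a simple build comparison: each build carries the set of changes it contains, each change is linked to its reason, and the release notes are the difference between the two sets. This is an illustrative sketch only; the data structures and field names are hypothetical, not a real ALM tool's API.

```python
# Hypothetical sketch: release notes generated by diffing two builds.
# "changes" maps a change id to its reason (problem report or feature).

def release_notes(old_build, new_build, changes):
    """List problems fixed and features added between two builds."""
    new_change_ids = set(new_build["changes"]) - set(old_build["changes"])
    problems, features = [], []
    for cid in sorted(new_change_ids):
        reason = changes[cid]  # every change is linked to its reason
        if reason["type"] == "problem":
            problems.append(reason["title"])
        else:
            features.append(reason["title"])
    return {"problems_fixed": problems, "new_features": features}

changes = {
    "C1": {"type": "problem", "title": "Fix crash on startup"},
    "C2": {"type": "feature", "title": "Add CSV export"},
    "C3": {"type": "problem", "title": "Fix login timeout"},
}
old_build = {"changes": ["C1"]}
new_build = {"changes": ["C1", "C2", "C3"]}
notes = release_notes(old_build, new_build, changes)
```

The key point is that the notes fall out of the data automatically: because each change already records why it was made, no one has to reconstruct the list by hand.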
This is good and useful. For many shops this might be a leap forward. But in the context of an ALM solution, it doesn't go far enough.
If my world is limited to taking a list of problems and features, fixing or implementing them, and delivering builds with those implementations, the above release notes are helpful. However, if my world instead deals with taking a requirements specification, which changes over time, and ensuring that a conforming product can be demonstrated to the customer, my world is a bit bigger. If those requirements include a budget and a delivery schedule, it's bigger again. If I happen to have one of those management structures that wants to know the status of development, including whether or not we'll meet our requirements within budget and on time, it's bigger still. And if I have to track which customers have which problems and feature requests outstanding with each delivery, it's bigger yet.
Many customers ask for much more than release notes. They want to know about their outstanding requests. They want to know about the quality level of the software we're shipping them. They want to know about the risks involved. They want to review the product specs before development is complete so that they may have additional input on the functionality. They want to know if our delivery schedule will be on time. We need a virtual maze of data to track the information involved in the ALM process (see diagram), and to support the audit process.
Traceability and Auditability
Traceability is the ability to look both backward at why something happened and forward at the impact a request has on what must happen. It's a two-way street that allows me to navigate through the inputs and outputs of my process.
Auditability allows me to use that traceability to ensure that requirements and requests have been met, and to identify any non-conformances or deviations in the delivered product.
Traceability requires providing both the linkage structure and the data that ties one piece of information to another. At the simplest level, every time a change is made, the change (itself a first-level object) is linked to the reason for the change (typically a problem report or a feature activity). In a requirements-driven process, each customer requirement is traced to/from a product requirement, which is traced to/from a design requirement, which is traced to/from the source code change. In other shops, product requirements are traced to/from feature activities that are part of the WBS, and these are traced to/from the source code change.
This traceability means that I can go to a file revision and look at the change that produced it (one traceability step), and from there identify why the change was made, and hence why the file revision was created.
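The two-way linkage and the backward navigation just described can be sketched as a small graph of bidirectional trace links. The record identifiers (CR, PR, DR, CHG) and the traversal function are hypothetical, chosen only to mirror the customer-requirement-to-source-change chain above.

```python
# Hypothetical traceability store: every link is recorded in both
# directions, so navigation works forward (impact) and backward (why).

links = {}  # record id -> set of linked record ids

def link(a, b):
    """Record a two-way trace between records a and b."""
    links.setdefault(a, set()).add(b)
    links.setdefault(b, set()).add(a)

# The chain from the text: customer requirement -> product requirement
# -> design requirement -> source change -> file revision.
link("CR-7", "PR-12")
link("PR-12", "DR-3")
link("DR-3", "CHG-42")
link("CHG-42", "src/main.c@5")

def trace_back(record, stop_prefix="CR-"):
    """Walk the links from a file revision back to the originating
    customer requirement, answering "why was this revision created?"."""
    seen, frontier = {record}, [record]
    while frontier:
        current = frontier.pop()
        if current.startswith(stop_prefix):
            return current
        for neighbour in links.get(current, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append(neighbour)
    return None

origin = trace_back("src/main.c@5")
```

Because every link is stored in both directions, the same structure also answers the forward question: starting from CR-7, which file revisions does a requirement change impact?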
If we look at the V-model of development, the downward left side of the V deals with this development process, from requirements down to code. The upward right side of the V deals with