building and testing. Again, it is important for the traceability linkages to continue both up the V and across to the left side. However, when we also include the actual test results (as opposed to just the test suites) on the right side, we now have a specific identification of which requirements on the left side were verified as met by test results on the right side. In software, this is the key component of the Functional Configuration Audit (FCA), as it verifies that the as-built product conforms to (or fails to conform to) the set of requirements.
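The FCA-style check described above can be sketched in a few lines. This is a minimal illustration, not any particular toolset's API: it assumes we already have a requirement-to-test-case mapping and a table of test results, and it reports which requirements are verified by at least one passing test and which are not.

```python
# Hypothetical FCA helper: given requirement-to-test links and test
# results, split requirements into verified and unverified sets.
def fca_report(req_to_tests, test_results):
    """req_to_tests: {requirement id: [test case ids]}
    test_results: {test case id: "pass" | "fail"}"""
    verified, unverified = [], []
    for req, tests in req_to_tests.items():
        # A requirement is verified if any linked test case passed.
        if any(test_results.get(t) == "pass" for t in tests):
            verified.append(req)
        else:
            unverified.append(req)
    return sorted(verified), sorted(unverified)
```

The unverified list is exactly the FCA's output of interest: requirements the as-built product has not yet been shown to meet.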
With appropriate tools and architecture, it is also possible to embed the specific file revision identifiers and even the build tool revisions into the executables to help with Physical Configuration Audit (PCA), such as it may be for software.
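One common way to support a PCA is to generate a build-info record at build time and ship it inside the deliverable. The sketch below is illustrative (the function name and record fields are assumptions, not any specific tool's format): a build step records the baseline, the per-file revision identifiers, and the build-tool versions.

```python
# Sketch of a build step that embeds configuration-audit data into the
# deliverable, so a PCA can later confirm exactly which file revisions
# and tool versions produced it. Field names are illustrative.
import datetime
import json

def write_build_info(path, baseline, file_revisions, toolchain):
    """Write a build-info record alongside (or into) the executable.

    baseline:       e.g. "R4.2-baseline"
    file_revisions: {source file: revision identifier}
    toolchain:      {build tool: version}
    """
    record = {
        "baseline": baseline,
        "file_revisions": file_revisions,
        "toolchain": toolchain,
        "built_at": datetime.datetime.utcnow().isoformat() + "Z",
    }
    with open(path, "w") as f:
        json.dump(record, f, indent=2)
    return record
```

At audit time, comparing this record against the repository's own history confirms (or refutes) that the shipped binary matches its claimed revisions.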
Where do I need traceability? Quite frankly, trace whatever can be traced. Who did something? When? Under what authority? How was that verified? When and by whom? In what context? Here's a handful of things we like to track:
- What files were modified as part of a change or update? [We use the term update here to indicate that a single person has performed all the changes.]
- Which problem(s) and/or feature(s) were being addressed as part of the update?
- Which requirement or feature specification does each problem refer to?
- Which requirement does each feature specification correspond to?
- Which requirements were changed/added as a result of each feature request?
- Which customers made each feature request or problem fix request?
- Which baseline is a build based on? Which updates have been added to the baseline to produce the unique build?
- Which requirements are covered by which test cases? Which ones have no test case coverage?
- Which test cases were run against each build? Which ones passed? Which ones failed?
- How much time does each team member spend on each feature?
- Which features belong to which projects?
- What are the dependencies between features and between changes?
That's just a partial list. But it's sufficient to give some examples of the benefits of traceability.
One of the most common queries in our shop is: What went into a build? This may seem like an innocent question, or to some who know better, it may seem like a terribly complex question. When we ask what went into a build, we ask these questions:
- What problems were fixed?
- What problems are outstanding?
- What features have been added?
- What requirements have been met?
- Which updates (i.e., change packages) went into the build?
- Which files were changed?
- How have run-time data configuration files changed?
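If a build is recorded as a baseline plus a set of updates, most of the answers above fall out by aggregating over those updates. The sketch below assumes simple dictionary-shaped records (the shapes and field names are illustrative, not a real tool's schema):

```python
# Derive "what went into a build" from a build record plus per-update
# traceability data. Data shapes are illustrative.
def build_contents(build, updates):
    """build:   {"baseline": ..., "updates": [update ids]}
    updates: {update id: {"files": [...], "problems": [...], "features": [...]}}"""
    files, problems, features = set(), set(), set()
    for uid in build["updates"]:
        u = updates[uid]
        files |= set(u.get("files", []))
        problems |= set(u.get("problems", []))
        features |= set(u.get("features", []))
    return {
        "baseline": build["baseline"],
        "files": sorted(files),
        "problems_fixed": sorted(problems),
        "features_added": sorted(features),
    }
```

The same aggregation is the raw data for release notes: the problems-fixed and features-added lists are what the technical writer starts from.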
Why is this so common a query in our shop? Well, first of all, our ALM toolset makes it easy to do. But secondly, when something goes wrong, we want to isolate the cause as quickly as possible.
If a new delivery to a customer introduces a problem they didn't previously have, we ask: what changes went into this build as compared to the customer's previous release build? We then screen these based on the problem and, more often than not, isolate the cause quickly. Even more frequently, if the system integration team, or verification team, finds a problem with an internal build, we go through the same process and isolate the cause quickly.
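The screening step described above can be sketched as a set difference followed by a filter. This assumes the per-update file lists are already on record (function and field names are illustrative):

```python
# Sketch: isolate a regression by diffing two builds' update sets, then
# narrowing to updates that touch files implicated by the symptom.
def suspect_updates(new_build_updates, prev_build_updates):
    """Updates in the new build but not in the previous release build."""
    return sorted(set(new_build_updates) - set(prev_build_updates))

def screen_by_files(suspects, updates, files_implicated):
    """updates: {update id: {"files": [...]}}
    Keep only suspects that modified an implicated file."""
    implicated = set(files_implicated)
    return [u for u in suspects if implicated & set(updates[u]["files"])]
```

The difference yields the candidate updates; the screen against the files implicated by the failure usually leaves only one or two to inspect by hand.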
By having a sufficiently agile development environment that packages a couple of dozen or so updates into each successive build, we're able to pinpoint new problems and hopefully turn them around in a few minutes to a few hours. Finally, we need to produce release notes for our customers. This type of query is the raw data needed by the technical writer to produce release notes. It's also the raw data needed by our sales team to