logical, not a physical object. Still on the physical side of things, you may want to audit that the file revisions claimed to be in the executable are indeed in the executable, and that the versions of the tools used to produce the executable were indeed used. This requires a bit of process: a means of automatically inserting actual revision information into the software executables, and a means of extracting and displaying that information. Sometimes the process that automatically inserts this information is itself audited to ensure its accuracy.
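As a minimal sketch of the extraction side, assume revision information is embedded in the binary as SCCS-style "what strings" (text prefixed with `@(#)`, a long-standing convention scanned by the classic `what` command; the marker choice here is an assumption, not a specific tool's mechanism):

```python
import re

WHAT_MARKER = b"@(#)"  # SCCS-style "what string" prefix embedded in the binary


def extract_revisions(path):
    """Scan a built executable for embedded revision strings.

    Returns the text following each @(#) marker, up to the first
    NUL, double quote, newline, or '>' byte -- the same stopping
    rule the classic 'what' command uses.
    """
    with open(path, "rb") as f:
        data = f.read()
    pattern = re.escape(WHAT_MARKER) + rb'([^\x00"\n>]*)'
    return [m.group(1).decode("ascii", "replace").strip()
            for m in re.finditer(pattern, data)]
```

An audit would compare this list against the revisions the CM repository claims went into the build, flagging anything missing or unexpected.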
Similarly, for functional audits, the process used to generate and record the information used by the audit needs auditing itself. For example, how does the process that creates test cases for a requirement, or set of requirements, ensure that test case coverage for the requirement is complete? Once you can vouch for its completeness, it is a relatively simple task to ensure that each requirement, or if you like, each requirement sub-tree, is covered by test cases. In CM+, for example, you would right-click on a portion of the requirements tree and select "Missing Test Cases"; any requirements lacking test case coverage are presented.
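The sub-tree check itself is straightforward to picture. Here is a minimal sketch against a hypothetical in-memory requirements tree (the `Requirement` model is illustrative, not CM+'s actual data structure):

```python
from dataclasses import dataclass, field


@dataclass
class Requirement:
    """Hypothetical model: a requirement with linked test-case ids
    and child requirements forming a sub-tree."""
    rid: str
    test_cases: list = field(default_factory=list)
    children: list = field(default_factory=list)


def missing_test_cases(req):
    """Return the id of every requirement in this sub-tree that has
    no test case linked to it -- the kind of query a "Missing Test
    Cases" report would run against the repository."""
    missing = [] if req.test_cases else [req.rid]
    for child in req.children:
        missing.extend(missing_test_cases(child))
    return missing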
Making sure you have test case coverage is just one part of the audit. You also have to make sure that the test cases were run, and record their results. The "passed" test cases exercised against a particular "build" (i.e. an identified set of deliverables) identify the functionality that has been successfully verified.
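Joining the two record sets, coverage links and per-build test results, yields the verified functionality. A minimal sketch, with hypothetical dictionary shapes standing in for repository queries:

```python
def verified_requirements(build_id, test_runs, coverage):
    """Requirements fully verified against a given build.

    test_runs: {(test_case_id, build_id): "passed" | "failed"}
    coverage:  {requirement_id: [test_case_id, ...]}

    A requirement counts as verified only if it has at least one
    test case and every one of them was run against this build
    and passed.
    """
    return sorted(
        rid for rid, cases in coverage.items()
        if cases and all(test_runs.get((tc, build_id)) == "passed"
                         for tc in cases)
    )
```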
What Does It Cost?
Traceability seems like a lot of extra effort. Does the payback justify the effort?
First of all, cost is irrelevant if the traceability data is incomplete or invalid. Here's where tools are important. Most of your traceability data should be captured as a by-product of efficient everyday processes. A checkout operation should have an update (i.e. change package) specified against it. A new update should be created by a developer from his/her to-do list of assigned problems/features by right-clicking and selecting an "implement" operation. Problem reports should be generated by a simple action applied to the customer request, to which they would then be linked. It is important that your processes are defined such that all the necessary data is collected.
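The checkout case can be sketched in a few lines. This is a hypothetical API, not any particular tool's, but it shows the principle: the operation simply refuses to proceed without a change-package context, so the traceability link is recorded as a by-product rather than reconstructed later.

```python
class TraceabilityError(Exception):
    """Raised when an operation would break the traceability chain."""


def checkout(links, path, update_id):
    """Check out a file, recording which update (change package)
    the change belongs to.

    links: a running list of {"file": ..., "update": ...} records,
    standing in for the repository's traceability data.
    """
    if not update_id:
        raise TraceabilityError(
            "checkout requires an update (change package) so the "
            "change is linked to its problem/feature")
    record = {"file": path, "update": update_id}
    links.append(record)
    # ...perform the actual version-control checkout here...
    return record
```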
A good object-oriented interface can make the process effortless, or even a net gain. Traceability data should be generated whenever an action on one object causes a new object to be created, or whenever an action on two objects establishes some common link between them.
The developer has to check out a file to change it anyway, so why not select the current update from a to-do list? That gives the change context, helps with branch automation, and yields traceability as a by-product. It sure beats typing in a reason for the checkout, or entering a problem or task number that may be foreign to the version control repository. It also means that, down the road, the developer does not have to supply any missing information.
Still, if you do not have a good starting architecture and toolset, the effort to support your process effectively while gathering traceability information can be painful and costly. Make sure you start with a flexible toolset that lets you change the data and processes to meet your requirements. Also make sure that you can customize the user interface enough to give your team members the least grief and the greatest payback.
In the end, your customers will be satisfied, your team members will be satisfied, and you'll end up with a better product.