let prospects know what's in the new release, or what's coming down the pipe.
Customer requests provide useful input. In fact, we find that real-world customer input is generally superior to input gathered through other means when deciding what should go into the next release of a product.
But managing customer requests is not a trivial task. Each customer request is traced to one or more features or problem reports. These in turn reference the product requirements from which they stem. Each development update must be traced to one or more problem reports or features. Each build is traced to a baseline and a set of updates applied to that baseline. Each test case is traced to the requirement(s) it addresses, and each test run is traced both to the build being used for testing and to the set of test cases that apply, recording, for each, whether it has passed, failed, or not been run.
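The traceability links above can be sketched as a small data model. This is a minimal illustration, not any particular CM/ALM tool's schema; all class and field names are assumptions chosen for readability.

```python
# Illustrative traceability model: requests -> features/problems -> requirements,
# updates -> features, builds -> baseline + updates, test runs -> build + cases.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str

@dataclass
class Feature:  # a problem report would look much the same
    feat_id: str
    requirements: list = field(default_factory=list)  # requirements it stems from

@dataclass
class CustomerRequest:
    request_id: str
    features: list = field(default_factory=list)  # features/problems it spawned

@dataclass
class Update:
    update_id: str
    features: list = field(default_factory=list)  # each update traces to features

@dataclass
class Build:
    build_id: str
    baseline: str
    updates: list = field(default_factory=list)

@dataclass
class TestCase:
    case_id: str
    requirements: list = field(default_factory=list)

@dataclass
class TestRun:
    build: Build
    results: dict = field(default_factory=dict)  # TestCase -> "passed"/"failed"/"not run"
```

With links like these in place, navigating from a test result back to the customer requests it affects is a matter of following references.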
The traceability information allows the process to update status information automatically. When an update is checked in, the problem/feature state may advance. When a build is created and integration tested successfully, the problem/feature states may advance again, and so do the corresponding requests (possibly from multiple customers, or internal ones) that spawned those problems/features. When specific feature/problem tests are run against a build, the related requirements, features and problems are updated, as are the related requests. And this happens again when the build is delivered to the field and promoted to an in-service state.
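That propagation can be expressed as a simple rule: advancing a feature's state also advances the requests that spawned it, once all of their features have caught up. The state names and function below are an illustrative sketch, not a real tool's workflow.

```python
# Hypothetical state propagation along the traceability links.
STATES = ["open", "implemented", "integrated", "verified", "in_service"]

def advance(record_id, new_state, state_map):
    """Move a record forward through the lifecycle; never move it backward."""
    current = state_map.get(record_id, "open")
    if STATES.index(new_state) > STATES.index(current):
        state_map[record_id] = new_state

def on_build_integrated(build_features, request_links, feature_state, request_state):
    """A build passed integration testing: advance every feature it contains,
    then advance each request whose features have all reached that state."""
    for feat in build_features:
        advance(feat, "integrated", feature_state)
    for req, feats in request_links.items():
        if all(feature_state.get(f) == "integrated" for f in feats):
            advance(req, "integrated", request_state)
```

The same pattern repeats for verification and delivery: each event advances the records it directly touches, and the links carry the change outward.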
With all of these states kept up to date, it is a fairly trivial matter to produce, for each customer, a report showing the status of each request, with clear identification of which ones have been implemented, verified, or delivered, and which are still to be addressed. Customers appreciate this by-product of traceability.
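Once request states are maintained automatically, the report itself is little more than a grouping pass. A minimal sketch, assuming request states are available as simple tuples:

```python
# Group each customer's requests by status for per-customer reporting.
def customer_report(requests):
    """requests: iterable of (customer, request_id, state) tuples."""
    report = {}
    for customer, req_id, state in requests:
        report.setdefault(customer, {}).setdefault(state, []).append(req_id)
    return report

report = customer_report([
    ("Acme", "R-101", "delivered"),
    ("Acme", "R-102", "verified"),
    ("Acme", "R-107", "open"),
    ("Beta", "R-201", "implemented"),
])
```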
A project is sometimes defined as a series of activities/tasks which transform a product from one release to another. In a more agile environment, a project might be split up into quite a few stepping stones. The tasks include feature development, problem fixes, build and integration tasks, documentation tasks and verification tasks.
If these tasks are traced back to the rest of the management data, it becomes a straightforward task for Project Management and CM/ALM to work together. Ideally, the features assigned to developers for implementation are the same object as the feature tasks/activities which form part of the work breakdown structure (WBS) of the project management data (see my June 2007 article).
This traceability makes it easy to identify each checkpoint of an activity, allowing accurate roll-up of project status even without the detailed scheduling that is often absent in a priority-driven agile environment. In our shop, we go one step further and tie the time sheets directly to the ALM tasks, again within the same tool. This allows for automatic roll-up of actual effort, which in turn gives us better data for the next round of planning, in addition to helping with risk identification and management.
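The effort roll-up described above amounts to summing time-sheet entries per task and then per WBS node. The function and field names here are assumptions for illustration, not our tool's actual schema:

```python
# Roll actual effort up from time-sheet entries to tasks and WBS nodes.
def roll_up_effort(timesheet_entries, task_to_wbs):
    """timesheet_entries: iterable of (task_id, hours);
    task_to_wbs: maps each task to its WBS node."""
    by_task, by_wbs = {}, {}
    for task_id, hours in timesheet_entries:
        by_task[task_id] = by_task.get(task_id, 0) + hours
        node = task_to_wbs.get(task_id)
        if node is not None:
            by_wbs[node] = by_wbs.get(node, 0) + hours
    return by_task, by_wbs
```

Comparing the rolled-up actuals against the original estimates per WBS node is what feeds the improved planning data.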
Traceability allows this flow of status information to proceed. In some tools, this is a trivial task (especially if all of the component management functions are integrated into a single repository), while in others it is less trivial and requires some inter-tool glue. But it's the traceability that permits the automation of the function.
Auditing is the task of verifying that what you claim is accurate. If you claim that a product has certain features, a functional audit will verify that the product does indeed have those features. If you claim that a product fits a particular footprint, a physical audit will verify this.
In software, the focus is more on the functional configuration audit than the physical configuration audit. The reason for this is that software is a