perhaps involving airworthiness and/or standards compliance. Others might simply need testing against the product requirements. There are a number of testing areas to consider. Your Plan must identify guidelines: specify the testing areas and define their goals. Make sure that when the test architecture is laid out, these goals are met. Adjust the levels/areas of testing as necessary to accomplish the goals you need. Spell out the goals in terms of the products you are producing. Some of the testing areas will be quite common, while some may be very specific to your vertical, or even to your application or system.
Unit testing: In software, unit testing usually refers to one of two things: verification of the functionality of an API (the traditional meaning), or verification of the functionality of a change. The latter is the more frequent type of testing that a developer needs to be doing. Before check-in, unit testing should be documented and completed successfully by the developer.
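For instance, a change-level unit test might look like the following. This is a minimal sketch in Python; the parse_version() helper is hypothetical, standing in for whatever code the change touches:

```python
# Minimal change-level unit test (Python's built-in unittest).
# parse_version() is a hypothetical stand-in for the changed code.
import unittest

def parse_version(text):
    """Split a 'major.minor' string into a (major, minor) tuple."""
    major, minor = text.split(".")
    return int(major), int(minor)

class ParseVersionTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(parse_version("2.7"), (2, 7))

    def test_rejects_garbage(self):
        # Malformed input should fail loudly, not return nonsense.
        with self.assertRaises(ValueError):
            parse_version("not-a-version")

if __name__ == "__main__":
    unittest.main()
```

The point is that the test is cheap to run and documents, right next to the change, what the developer verified before check-in.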
Sanity testing: When multiple changes are being integrated by multiple developers, ensuring that the resulting product remains sane is essential. One change can "break the build". Frequent sanity testing makes it easier to find out which change broke the build. Some applications can avail themselves of continuous integration and sanity testing; others require regular builds and sanity testing. The goal here is to ensure that the development test bed remains sane and that product quality is not heavily impacted by the recent set of changes.
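A sanity suite can be as simple as a script the build machine runs after every build. Here is one hedged sketch - the myapp binary, its path, and its flags are all hypothetical:

```python
# Sanity (smoke) check a CI job might run after each build.
# The binary path and flags below are hypothetical examples.
import subprocess
import sys

SANITY_CHECKS = [
    ["./build/myapp", "--version"],    # does the binary launch at all?
    ["./build/myapp", "--self-test"],  # fast built-in diagnostics
]

def main():
    for cmd in SANITY_CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"SANITY FAILED: {' '.join(cmd)}\n{result.stderr}")
            return 1
    print("Build is sane.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```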
Integration testing: When the changes come together, the functionality provided by each change needs to be tested in the context of all of the other changes. This integration testing will typically address problems that have been fixed and features that have been added or changed. It may also include a quick, basic assessment of the overall product quality.
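As a sketch, an integration test exercises independently developed pieces together rather than in isolation. Both stand-ins below (a data layer and a report feature) are hypothetical:

```python
# Integration-test sketch: two independently changed pieces are
# exercised together. Both classes are hypothetical stand-ins.
import unittest

class InMemoryStore:
    """Stand-in for a data layer changed by developer A."""
    def __init__(self):
        self._rows = []

    def add(self, row):
        self._rows.append(row)

    def all(self):
        return list(self._rows)

def summarize(store):
    """Stand-in for a report feature added by developer B."""
    return {"count": len(store.all())}

class StoreReportIntegrationTest(unittest.TestCase):
    def test_report_sees_new_rows(self):
        # The report change only works if it agrees with the
        # store change about the data contract between them.
        store = InMemoryStore()
        store.add({"id": 1})
        store.add({"id": 2})
        self.assertEqual(summarize(store), {"count": 2})

if __name__ == "__main__":
    unittest.main()
```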
Regression testing: More often than not, a new set of changes is going to break some existing functionality. Software is notorious for this behaviour. Running a full set of regression tests can be expensive, so it is important to identify the frequency and approach. Perhaps 80% of the tests can be automated and run relatively quickly - allowing a partial regression test with every build. Your plan must pay close attention here: ensure that you record test results against the build, and that you can easily trace these results and align them with the requirements, giving you a clear requirements traceability matrix for both coverage and success/failure.
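One way to make that traceability concrete is to record each result against the build together with the requirement it covers. In this sketch the requirement IDs, test names, and build label are all hypothetical:

```python
# Sketch: roll per-build regression results up into a simple
# requirements traceability matrix. All IDs are hypothetical.
import json

RESULTS = [
    # (requirement id, test case id, status)
    ("REQ-101", "test_login_basic",   "pass"),
    ("REQ-101", "test_login_expired", "fail"),
    ("REQ-205", "test_export_csv",    "pass"),
]

def traceability_matrix(results):
    """Per requirement: how many cases cover it, how many failed."""
    matrix = {}
    for req, case, status in results:
        entry = matrix.setdefault(req, {"cases": 0, "failures": 0})
        entry["cases"] += 1
        if status == "fail":
            entry["failures"] += 1
    return matrix

if __name__ == "__main__":
    record = {"build": "nightly-2042",
              "matrix": traceability_matrix(RESULTS)}
    print(json.dumps(record, indent=2))
```

Stored per build, records like this let you answer both questions the Plan cares about: which requirements are covered, and which are currently failing.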
Your CM Plan must address how test cases are to be categorized, managed, and accessed. Change control may be quite different from that for source code, as test cases are typically more "snippet"-based and hence individual cases change less frequently. Instead, they are usually supplemented with new test cases to address new functionality or non-conformance issues.
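A lightweight way to support that categorization is to tag each case with metadata - testing area, covered requirements, originating defect - so cases can be selected and traced without heavyweight change control. The tag names and IDs below are hypothetical:

```python
# Sketch: test-case metadata for categorization and retrieval.
# Area names, requirement IDs, and defect IDs are hypothetical.
TEST_CASES = {
    "test_login_basic":  {"area": "regression",  "reqs": ["REQ-101"]},
    "test_export_csv":   {"area": "regression",  "reqs": ["REQ-205"]},
    "test_fix_crash_77": {"area": "integration", "reqs": [],
                          "defect": "BUG-77"},
}

def select(area):
    """Pull the subset of cases belonging to one testing area."""
    return [name for name, meta in TEST_CASES.items()
            if meta["area"] == area]

print(select("regression"))
```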
Just the Start
There are a lot of areas I've not addressed. There are many things I said that others might be willing to contest. Great. We're putting a plan together to do the best we can across the Application Lifecycle Management process. Anyone with experience can add valuable input.
So this is a start. Your CM Plan is not going to name tools and processes as requirements. It may use some examples, but technology is constantly changing - your goals are what is important here. Aim high. If you're aiming lower because technology has to catch up, you've both missed the point of the Plan and underestimated the available technology and people.
Should there be a common