the criteria for approving a change to be that the change did what it was expected to do, and did not break any existing functionality. You can do this by having a codeline policy that requires that new code:
- Include new unit tests
- Pass a workspace build, including unit tests
- Pass an integration build (including all unit and integration tests)
Each of these rules can also be validated through an automated process; test coverage tools allow you to "test" whether new code has unit tests, for example, checking not only functional compliance (the code still works based on the tests) but process compliance (the metrics we consider important are also met). *Continuous Integration: Improving Software Quality and Reducing Risk* has excellent practical advice on how to use your build process to measure various quality and policy metrics.
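As a rough illustration, here is a minimal sketch of how such a policy gate might be automated for a Python codebase. It assumes the project uses pytest and coverage.py with tests under a `tests/` directory; the `MIN_COVERAGE` threshold and the `check_policy()` helper are illustrative names, not part of any particular tool.

```python
import subprocess
import sys

# Illustrative policy threshold (an assumption); real projects would tune this.
MIN_COVERAGE = 80.0

def check_policy() -> bool:
    """Run the unit test suite under coverage and enforce the codeline policy."""
    # "Pass a workspace build, including unit tests": run the suite under coverage.
    result = subprocess.run(["coverage", "run", "-m", "pytest", "tests/"])
    if result.returncode != 0:
        print("Policy violation: unit tests failed")
        return False

    # "New code has unit tests": approximated here with a coverage threshold.
    report = subprocess.run(
        ["coverage", "report", "--fail-under", str(MIN_COVERAGE)]
    )
    if report.returncode != 0:
        print(f"Policy violation: coverage below {MIN_COVERAGE}%")
        return False

    return True

if __name__ == "__main__":
    sys.exit(0 if check_policy() else 1)
```

A gate like this can run as a pre-submit check or as the first stage of the integration build, so a change that violates the policy never reaches the shared codeline.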
Another difference between this point of view and traditional SCM is that traditional SCM is often "event-based", focused on baselines, individual changes, and so on. Lean/Agile SCM is focused on managing the flow of change across the value stream. In an event-based model the fact that a developer made a change is of primary interest, and the infrastructure is focused on tracking (and perhaps preventing) the changes developers make. In the model we're discussing here, the item of interest is the impact that the developer's change had on the system, and we can use criteria like code quality to initiate action, roll back changes, and so on. The difference is one of priorities and focus. We still care about tracking the various events, as they allow us to recover when things start going in the wrong direction. But we want to report on the impacts, not simply the events.
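To make the "impacts, not events" idea concrete, the sketch below shows a post-commit gate that reacts to the measured impact of a change rather than merely recording that it happened. It assumes a Git repository; `run_integration_build()` is a hypothetical stand-in for whatever command actually drives your integration build (a `make integration-test` target is assumed here purely for illustration).

```python
import subprocess

def run_integration_build() -> bool:
    """Hypothetical hook: invoke the project's integration build and tests."""
    result = subprocess.run(["make", "integration-test"])
    return result.returncode == 0

def gate_latest_change() -> None:
    # The event (a commit landed) is only the trigger; what we act on
    # is its impact: did the integration build and tests still pass?
    if run_integration_build():
        print("Change accepted: no negative impact on system health")
        return
    # Negative impact detected: roll the change back, leaving an audit trail.
    sha = subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True
    ).stdout.strip()
    subprocess.run(["git", "revert", "--no-edit", sha], check=True)
    print(f"Change {sha} reverted: integration build failed")

if __name__ == "__main__":
    gate_latest_change()
```

Note that the revert itself is recorded as an event, so the history of what happened is preserved even though the decision was driven by impact.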
Much has been written about the different kinds of tests (functional tests, integration tests, unit tests) and who writes them (developers, QA engineers, etc.), so we won't cover that here. It is important to understand that there are many places in the Software Development Ecosystem timeline where testing happens, and each has an impact on SCM.
- During Coding: software developers write unit tests for any changes or additions they are making, and frequently run the unit tests for other parts of the code to ensure that their change did not break anything. This might also be a good time to extend the functional test suite. When developers feel that their work is ready to share, they update their workspace, do a final build, and run the unit test suite.
- Once code is submitted, an automated integration build is run. This build might run all the unit tests as well as any integration tests (see the test-marking sketch after this list).
- Periodically (nightly, or more frequently if possible), longer-running automated regression or functional tests are run.
- As new features appear on the scene, manual testing can happen for those features that do not yet have automated tests.
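One lightweight way to support these stages with a single test suite is to tag tests by scope, so each stage runs only the slice it needs. The sketch below uses pytest markers; the marker names (`unit`, `integration`) are conventions chosen for illustration, not pytest built-ins, and the function under test is defined inline just to keep the example self-contained.

```python
import pytest

def parse_version(text: str) -> tuple:
    """Toy function under test, defined here so the sketch runs on its own."""
    return tuple(int(part) for part in text.split("."))

# Fast, isolated test: run during coding and in the workspace build.
@pytest.mark.unit
def test_parse_version():
    assert parse_version("1.2.3") == (1, 2, 3)

# Slower test crossing a boundary: run in the automated integration build.
@pytest.mark.integration
def test_version_round_trip():
    version = parse_version("4.5.6")
    assert ".".join(str(n) for n in version) == "4.5.6"
```

Each stage then selects its slice: `pytest -m unit` for the workspace build, `pytest -m integration` (or simply the full suite) for the integration build, leaving the long-running regression suites to the nightly runs. (Custom markers like these would need to be registered in `pytest.ini` to avoid warnings.)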
The "SCM is Testing" position also frustrates people because it seems to be placing an QA function (Testing) in the hands of another team (Release Engineering). To be able to successfully respond to change you need to forget about those boundaries, and think of SCM as being an element (perhaps a central element) of the software development environment.
As we mentioned above, there is a sense in some circles that testing is not relevant to the SCM community because testing is not part of build management or release engineering. The problem with this idea is that if SCM is about ensuring the integrity of the product, what other mechanisms do we have to do this? While we often speak in favor of cross-functional teams