then become part of the change package, a record which the developer can deal with as necessary. Again, communication is improved. Or the same form can be used in a one-on-one review.
Figure 2. A sample Update Review Station (from CM+)
Peer reviews are essential in agile shops because each iteration must maintain the stability of the software. Perhaps design managers will review their staff's changes. Perhaps a quality team will do an independent review. The key is to make the review easy and presentable, and to have the ability to capture comments in context. The review should also cover the unit testing performed by the developer on the change. This can be facilitated by capturing the tests in the ALM tool, preferably in executable format so that they may evolve into test cases. The tests can be reviewed, and demos should be encouraged as part of the peer review where possible.
One more key capability that the ALM tool must provide as part of a review station is the ability to zoom in on the questions that come up during a review: what are the details of the task/problem from which this change results; where is this identifier referenced; when was this line or that line (not part of this change) added to the code, and why. These types of questions must be easily supported to improve the effectiveness of the review process.
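To make the drill-down concrete, here is a minimal sketch of the kind of line-level traceability lookup such a review station relies on. The data model, record names, and sample values are purely hypothetical; a real ALM tool such as CM+ would surface this through its own query interface.

```python
# Hypothetical in-memory model of line-level traceability data.
# All structures, ids, and values here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LineHistory:
    file: str
    line_no: int
    changeset: str   # changeset that last touched this line
    task: str        # task/problem the changeset implements
    rationale: str   # summary recorded against the task

# Sample annotation data, as an ALM tool might maintain per file.
HISTORY = {
    ("parser.c", 120): LineHistory("parser.c", 120, "cs-4812", "PR-209",
                                   "Fix crash on empty input buffer"),
}

def explain_line(file: str, line_no: int) -> str:
    """Answer the reviewer's question: when was this line added, and why?"""
    h = HISTORY.get((file, line_no))
    if h is None:
        return f"{file}:{line_no}: no recorded history"
    return (f"{file}:{line_no} last changed by {h.changeset} "
            f"for {h.task}: {h.rationale}")

print(explain_line("parser.c", 120))
```

The point is that each source line links back through its changeset to a task and its rationale, so a reviewer never has to leave the review to ask "why is this here?".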
7. What's In the Build
The ALM tool should also help everyone identify what's in the build. What's going into the next iteration build, nightly build or integration build; what went into the previous build or the previous iteration build - these types of questions must be supported, and at various levels of detail. The most important capability is not showing source code differences, but rather identifying which problems were fixed, which features were implemented, and which updates/changesets were added to the build.
What's in the build? This is a relative question. If I'm doing truly continuous builds, the latest update is in the build as compared to the previous. But more likely you'll want to ask: What broke this feature that was working in the last iteration? What change caused this feature to stop working for this customer? What features and problem fixes have to go into the next set of release notes?
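The "various levels of detail" can be sketched with a toy build manifest. The manifest format, build ids, and task ids below are assumptions for illustration; the idea is simply that changesets carry links to the tasks that motivated them, so the same data can be reported at the changeset level or rolled up to the feature/problem level.

```python
# Illustrative sketch: answering "what's in the build?" at two levels
# of detail. All build, changeset, and task ids are hypothetical.
BUILD_MANIFEST = {
    "build-142": {
        "cs-4810": {"task": "FEAT-88", "title": "Add CSV export"},
        "cs-4812": {"task": "PR-209",  "title": "Fix crash on empty input"},
        "cs-4815": {"task": "PR-214",  "title": "Fix off-by-one in pager"},
    },
}

def whats_in(build: str, level: str = "tasks"):
    """Report build content as raw changesets, or rolled up to tasks."""
    entries = BUILD_MANIFEST[build]
    if level == "changesets":
        return sorted(entries)
    # Roll up to the problem/feature level -- usually the useful view.
    return sorted({e["task"] for e in entries.values()})

print(whats_in("build-142"))                 # ['FEAT-88', 'PR-209', 'PR-214']
print(whats_in("build-142", "changesets"))   # ['cs-4810', 'cs-4812', 'cs-4815']
```

The task-level view is the one that answers release-planning and regression questions; the changeset view is there when you need to drop down a level.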
One of the purposes of continuous builds is to help ensure that not too much changes between tested builds, so that when a problem does occur it's easy to pinpoint the change that caused it. That's good, especially if you have an extensive set of automated tests that can be run on each build (e.g. the nightly build).
But when a problem arises that is easily reproduced but hard to diagnose, efficiency can be gained from a preliminary view of the features, problems and updates that have gone into the failed build as compared to the last known working build. In such a case, your ALM tools can save days of work by helping you locate potential causes without dropping to the source-line level or into a debug session.
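At its core this comparison is a set difference over build contents, mapped back to tasks. A minimal sketch, with all changeset ids, task names, and the mapping invented for illustration:

```python
# Sketch: narrowing down a regression by build-content comparison
# rather than source-line diffs. Build contents are sets of changeset
# ids; the changeset-to-task mapping is assumed traceability data.
GOOD_BUILD   = {"cs-4801", "cs-4805", "cs-4810"}
FAILED_BUILD = {"cs-4801", "cs-4805", "cs-4810", "cs-4812", "cs-4815"}

TASK_OF = {"cs-4812": "PR-209 (buffer handling fix)",
           "cs-4815": "PR-214 (pager rework)"}

def candidate_causes(good: set, failed: set):
    """List changes present in the failed build but not the last good one."""
    return sorted(TASK_OF.get(cs, cs) for cs in failed - good)

print(candidate_causes(GOOD_BUILD, FAILED_BUILD))
# ['PR-209 (buffer handling fix)', 'PR-214 (pager rework)']
```

Instead of hundreds of source-line differences, the investigator starts with a short list of tasks to suspect.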
Of course, if we want to stress working software over documentation in an agile shop, it helps if the ALM tool can also produce the release notes for us, based on the traceability information and the build content. Imagine not having to create release notes - a reduction in documentation effort without a reduction in documentation.
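Release-note generation from that same data is little more than grouping the build's tasks by kind. The record format and sample entries below are assumptions; in practice the ALM tool would supply these records from its traceability database.

```python
# Sketch: generating release notes from build content plus traceability.
# The record format and all ids/titles are hypothetical examples.
BUILD_CONTENT = [
    {"id": "FEAT-88", "kind": "feature", "title": "CSV export of reports"},
    {"id": "PR-209",  "kind": "problem", "title": "Crash on empty input"},
    {"id": "PR-214",  "kind": "problem", "title": "Off-by-one in pager"},
]

def release_notes(records):
    """Group a build's tasks into the two usual release-note sections."""
    features = [r for r in records if r["kind"] == "feature"]
    fixes    = [r for r in records if r["kind"] == "problem"]
    lines = ["New Features:"]
    lines += [f"  - {r['id']}: {r['title']}" for r in features]
    lines += ["Problems Fixed:"]
    lines += [f"  - {r['id']}: {r['title']}" for r in fixes]
    return "\n".join(lines)

print(release_notes(BUILD_CONTENT))
```

Because the notes are derived from the build content, they are accurate by construction: nothing that missed the build gets listed, and nothing in the build gets forgotten.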
8. Continuous Builds
Agile is not agile without continuous builds. Although there