and operations people are loath to spend their time working on something that will result in more changes when they are already swamped. However, we have tried to make it clear that it doesn't have to be a zero-sum game. If people release more frequently, and do so from the beginning of projects, the deltas will be very small, the deployment process will have been tested much more frequently, and everybody will have much more feedback on the production readiness of the application from the beginning of development, so the risk of each individual deployment becomes much, much lower.
STL: What is the value of being able to define/quantify risk when delivering software, and what are some techniques or tools for doing so?
JH: Understanding what contributes to risk in your delivery process is what allows you to mitigate those risks, and if you can reduce risks, you can save money. In particular, the main risks we address in the book are the risk of doing unplanned work when you discover your software is not fit for use, and the risk of complex, manual deployment processes that can lead to panic, roll-backs, late nights, and downtime. There's also an opportunity cost for IT in having to make your software deployable when you could instead be working on new features and, of course, an opportunity cost to the business from not having software live because it takes weeks to release a new version.
Really, the techniques all rely on the same basic principle: if it hurts, do it more often and bring the pain forward. Is it painful to deploy? Then deploy continuously from the beginning of your project, and deploy the same binaries using the same deployment process to every environment, so you've tested the deployment process hundreds of times by the time you come to deploy to production. Is it hard to make high-quality software? Then build quality in by testing throughout the project, which includes automated unit tests, acceptance tests, component tests, and integration tests, along with manual testing such as exploratory testing, showcases, and usability testing.
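The "same binaries, same deployment process" idea can be sketched in a few lines. This is a hypothetical illustration only: the artifact name, environment list, and `deploy` function are invented for the example, standing in for whatever your real pipeline does.

```python
# Sketch: one deployment routine, parameterized by environment, so the
# identical process (and the identical artifact) is exercised everywhere
# from CI through to production. All names here are hypothetical.

ARTIFACT = "app-1.0.jar"  # the same binary promoted through every environment
ENVIRONMENTS = ["ci", "uat", "staging", "production"]

def deploy(artifact, environment):
    """Run the standard deployment steps against one environment.

    In a real pipeline these steps would copy the artifact, update
    configuration, and restart services; here each step just reports
    what it would do, to show the process does not vary per environment.
    """
    if environment not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {environment}")
    steps = [
        f"copy {artifact} to {environment}",
        f"apply {environment} configuration",
        f"restart services in {environment}",
        f"smoke-test {environment}",
    ]
    return steps

# By the time production is reached, the exact same steps have already
# run (many times) against ci, uat, and staging.
for env in ENVIRONMENTS:
    for step in deploy(ARTIFACT, env):
        print(step)
```

The point of the sketch is that nothing branches on "is this production?": the only variable is the environment name, so every earlier run is a rehearsal of the production deployment.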
In terms of tools, there are really four kinds of tools you need: a good version control system (such as Subversion, Perforce, Git, or Mercurial), some effective testing tools (such as xUnit for unit and component tests), a tool for automated acceptance tests (such as Twist, Cucumber, or Concordion along with WebDriver, Sahi, or White), and a continuous integration and release management tool (such as Cruise, TeamCity, or Hudson).
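To make the xUnit category concrete, here is a minimal sketch using Python's `unittest` module, one member of the xUnit family. The function under test, `price_with_tax`, is invented purely for the example.

```python
import unittest

def price_with_tax(price, rate=0.2):
    """Hypothetical function under test: price plus tax, rounded to 2 places."""
    return round(price * (1 + rate), 2)

class PriceWithTaxTest(unittest.TestCase):
    """xUnit-style unit tests: small, fast, and run on every check-in."""

    def test_applies_default_rate(self):
        self.assertEqual(price_with_tax(100), 120.0)

    def test_zero_rate_leaves_price_unchanged(self):
        self.assertEqual(price_with_tax(50, rate=0), 50.0)
```

Run with `python -m unittest`; a continuous integration server would execute the same suite automatically on every commit and fail the build if any assertion fails.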
While I am biased because I am the product manager, Cruise (the commercial tool from ThoughtWorks Studios) is designed from the ground up to enable you to implement deployment pipelines. The forthcoming 2.0 version includes powerful features such as the ability to model your environments, so you can deploy any version of any application to any environment and manage multiple services that share the same environment (e.g., integration testing with an SOA). It can also distribute your tests on the build grid and tell you which tests failed and, if some tests have been failing for a while, which check-in broke each test and who was responsible.