issue. The pain of updating a test database to reflect the production database tends to increase the likelihood that the two drift "out of sync", leading to a variety of errors which really should be caught much earlier. The SCM implication is simply that test data and the resulting frameworks need to be version-controlled along with the code.
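As a minimal sketch of what "version-controlled along with the code" can mean in practice (assuming a Git repository; the schema and fixture files here are purely illustrative), a schema change and its matching test data can travel in one commit, so checking out any revision yields a consistent pair:

```shell
# Illustrative only: put test fixtures beside the code they exercise,
# so both share a single version history.
git init -q scm-demo && cd scm-demo
mkdir -p src tests/fixtures
echo "CREATE TABLE customers (id INT);" > src/schema.sql
echo "INSERT INTO customers VALUES (1);" > tests/fixtures/customers_sample.sql
git add src tests
git -c user.email=dev@example.com -c user.name=dev \
    commit -q -m "Schema change and matching test fixture in one commit"
```

Any later schema change that forgets to update the fixtures now shows up in the same diff, rather than surfacing weeks later as a mysteriously broken test environment.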
Embrace Collective Ownership
Collective ownership both reduces the risk of the key team member being "hit by a bus," and improves quality through practices such as pair programming. It diffuses knowledge throughout the team and avoids isolated silos of knowledge.
This may also encourage a matching SCM attitude: everybody does SCM on a project. Thus the SCM person needs to become a focused contributor to the team rather than an enforcer (and not just "do" SCM but also write code, etc.).
In our experience, this makes for a more enjoyable job, but we've come across quite a few SCM people who like to sit in their little dungeons making occasional forays out to frighten the natives a bit. Being "in the trenches" will also highlight the appropriate automation necessary to make things faster and easier.
Another related aspect of this is the need for the team to become fully conversant with SCM and comfortable with their tool and how to do things like branching and merging - which many are somewhat reluctant to do. Too many bad experiences in the past perhaps? This is where we can break down some of the barriers between SCM and development.
SCM Concerns for Agile Development
There are some areas where agile proponents view SCM with suspicion, and consider the activities to be "high ceremony and low value". Let us consider several of these.
What about the Cost of Deployment/Upgrade?
The flattened cost-of-change curve is all well and good when it is dominated by the cost of making development changes. But what if it is dominated by the cost of deploying and upgrading a change? Suppose it’s easy to make the change, but expensive to deploy it to every applicable instance of the software stored on an end-user’s device? This might be the case if:
- There are corresponding hardware changes required with the software change
- The system must be highly available, yet the upgrade imposes downtime during which all or part of the system is unavailable.
- The system is deeply tied to and/or used for significant business-process workflow or decision support, so an upgrade (and its downtime) requires new training to be developed and users to be retrained.
In each of these cases, there are no pat, easy answers. We can try to apply lean thinking and agile principles to mitigate the risk and rework for such events, but that may be the best we can do.
What about the Cost of Unforeseen Feature/Requirements Interactions?
Working in microscopic increments may be swell! In theory, I can see that the smaller the change/feature/iteration and the tighter the feedback loop, the smaller the rework associated with any such task/feature/iteration: there is no code that is more adaptable than no code (code that doesn’t exist yet).
But isn’t one of the main purposes of up-front requirements, analysis, and design to be able to anticipate what would otherwise have been unanticipated impacts of certain requirements upon other requirements, and avoid having to rework the architecture because of such last-minute discoveries? By developing features in business-value order rather than impact/risk-driven order, aren’t you basically rolling the dice that you won’t have to scrap something major and rework it all over from scratch?
In theory, the answer to the above question is