Every action elicits a response, but sometimes that response is not what we expect. These anecdotes from industry experts are good examples of how our results don't always match our intentions.
We take action A, expecting result R (which may or may not occur), but in addition, result U occurs—an unintended consequence. Result U can be positive, but more often than not, it is negative. Sometimes, the unintended consequence can be more significant than the result we were trying to achieve. Unintended consequences are not just occasional occurrences. In fact, the Law of Unintended Consequences states that any purposeful action will produce some unintended consequence.
A classic example is Thomas Austin’s release of twenty-four European rabbits into Australia for hunting in 1859. This single act led to the explosive growth of the rabbit population, now estimated at 600 million. The most serious mammalian pest on the continent, rabbits are responsible for the extinction or major decline of many native species. Annually, descendants of these twenty-four rabbits cause millions of dollars of damage to crops.
I asked a number of industry experts to share their favorite unintended consequences stories with me. Here are a few:
Jonathan Bach wrote, “You decide that session-based test management (SBTM) is a great idea to manage and measure exploratory testing (ET). SBTM makes ET accountable and trackable. It becomes so successful that your stakeholders require you to write formal test cases based on all of the test ideas you created in your ET sessions. Since no one else is available, the task falls to your team. They go from great explorers to bored clerks, filling in templates for the next four weeks, longing for the days when they used their brains creatively.”
Jean Tabaka shared, “Pushing features out too quickly causes a pile up of defects, which then results in reduced ability to push features out. In our attempt to serve too many masters, we create bloated, under-tested software. The unintended consequences are that you produce less and less that is high value, spend more time managing defects than creating new software, and make everyone unhappy.”
Linda Hayes wrote, “I worked on a project that had a bonus program based on reducing the number of defects coming out of development. The previous release had been thrown over to testing before it was ready, and it elongated the test cycle and delayed the release far past the promised delivery date. The bonus scheme created side negotiations about whether to report a defect and whether it was, in fact, a defect (as opposed to a missed requirement, etc.) and resulted in a ‘black market’ of defects that went unreported. The whole thing backfired because it did not encourage fewer defects, just less reporting, and it turned developers and testers into adversaries.”
John Fodeh wrote, “I remember an incident where a company introduced a bonus program rewarding the testers who found the highest number of defects (the severity of the defects was also taken into account using a sophisticated weighting algorithm). The program was introduced with the best intentions—promoting testing and finding as many prerelease defects as possible. The unintended consequence was that testers stopped putting much effort into defect-prevention activities, such as requirement validation, design reviews, etc. The silliness of this program was manifested when one tester stumbled across a defect regarding an unsorted list. He reported multiple defects—first list item is wrong, second list item is wrong, etc.”
Distinguished sociologist Robert K. Merton described four possible causes of unanticipated consequences: ignorance (we can’t anticipate everything), error (incorrect analysis), immediate interests (that override long-term interests), and basic values (that proscribe other, more beneficial actions).
While I appreciate Merton’s list of causes, I think the solution is much more basic—we become so enamored with the benefits of our ideas that