After reading The Day the Phones Stopped, published in 1991, Lee began wondering why the poor software quality and the complaints about development and testing documented in the book are the same ones we hear today.
I’d like to be a voracious reader, but I rarely have uninterrupted time to sit down with a stack of good books. Luckily, every so often, I take very long airplane rides and have a chance to read. Recently, I finished The Day the Phones Stopped by Leonard Lee. It’s a layman’s introduction to the world of software and its disasters, and an interesting read.
I was familiar with many of the software failures recounted in the book: the AT&T nationwide telephone switch failure, the Therac-25 radiation therapy system that emitted lethal doses of radiation, computerized baggage-handling systems that misdirected luggage, and the ineffectual performance of the Patriot missile during Operation Desert Storm.
Other failures, however, were new to me: faulty blood bank software that erased donor records, allowing AIDS-infected blood to be used in transfusions; the crash of the Saab-Scania Gripen jet fighter when it failed to respond to pilot commands; 747-400 airplane engine throttles that mysteriously shut off by themselves in mid-flight; the B-1B bomber with subsystems that interfered with each other’s operation; and the sinking of HMS Sheffield during the Falklands War because the captain made a phone call on the same frequency as an incoming enemy Exocet missile, preventing its detection.
The book is chock-full of statements familiar to us: “[Computer programs] are more complex … than perhaps any other human construct” (Fred Brooks); “It’s impossible to test for one hundred percent of all possible conditions. There are so many conditions that you just can’t test for every conceivable situation” (Karl Blesh, AT&T); “As you approach the ‘drop-dead’ date when a program’s supposed to be finished, they often short-cut the testing” (Jim Wilbern, KPMG Peat Marwick); “Unrealistic testing and unanticipated situations are a common theme in the world of software problems” (Leonard Lee); “There’s no way you can guarantee a computer system’s going to do the right thing under all possible circumstances. No matter how carefully you’ve designed it, it’s probably not good enough” (Peter Neumann, SRI International). My favorite is “Complete review of software testing is often impractical” (Robert Britain, NEMA). According to Britain, not only is complete testing impossible, but even reviewing the testing that was done is not practical.
For me, the most interesting thing about this book is that it was published in 1991—eighteen years ago. The examples of poor software quality and complaints about development and testing documented in this book are the same complaints we hear today. It seems that nearly two decades have passed and nothing substantive has changed.
What can bring about change? It is my experience that people and organizations change for three reasons. First, they are required to. Second, they are in so much pain that even change becomes more attractive than the status quo. Third, they develop a vision of a better future that inspires and motivates them.
In the first case, “required to,” some external entity will “beat us with a stick” if we don’t change. Examples are “Become CMMI Level 3 by this date or lose the contract” or “Become a certified developer or tester or risk not being employable.” External forces can be effective in fostering change. (Typically, these forces have unintended consequences, neither envisioned nor understood, but that’s for another column.) Yet there seem to be few effective corrective forces in our industry: Projects fail to deliver, yet we begin new projects; processes that produce failed projects are used for the next project; people who produced failed projects are employed on the next project. Einstein once said, “Problems cannot be solved by the same level of thinking that created them,” but we continue, stuck at the same level of thinking.