I buy new cars infrequently, typically every 10 to 12 years. So when I bought a new car in 2003 I was surprised at the many advances in technology since I’d purchased my previous car, a 1993 Honda. One advance I was particularly pleased with was a sensor that automatically detects low air pressure in my tires. It is sometimes hard to tell by looking at a tire if its pressure is low, and checking tires manually is a dirty job, so I did it infrequently. A continuous test of tire pressure was, I thought, a tremendous invention.
During the same period in which car manufacturers invented ways to test tire pressure continuously, software development teams learned that testing their products continuously was also a good idea. In the early days, back when we wrote programs by rubbing sticks together, we thought of testing as something we did at the end. It wasn’t quite an afterthought, but testing was intended to verify that no bugs had been introduced during the prior steps in the development process. It was kind of like making sure the oven is off, the windows are closed, and the front door is locked before heading out for a vacation. Of course, after we saw all the things that had gone wrong during the prior steps of the development process (how could they not have?), testing came to be viewed not as a verification step but as a way of adding quality to a product.
It wasn’t long before some teams realized that testing quality into a product at the end was both inefficient and insufficient. Such teams typically shifted toward iterative development. In doing so, they split the lengthy, end-of-project test phase into multiple smaller test phases, each of which followed a phase of analysis-design-code. This was an improvement, but it wasn’t enough. And so with Scrum we go even further.
Scrum teams make testing a central practice and part of the development process rather than something that happens after the developers are “done.” Rather than trying to test quality after a product has been built, we build quality into the process and product as it is being developed.
Why Testing at the End Doesn’t Work
There are many reasons why the traditional approach of deferring testing until the end does not work:
It is hard to improve the quality of an existing product. It has always seemed to me that it is easy to lower the quality of a product but that it is difficult and time consuming to improve it. Think about a time in your past when you were working on an application that had already shipped. Let’s say you were asked to add a new set of features while simultaneously improving the existing application’s quality. Despite lots of good work on your part, it is likely that months or even a year or more passed before quality improved enough that users could notice. Yet this is exactly what we try to do when we test quality into a product at the end.
Mistakes continue unnoticed. Only after something is tested do we know that it really works. Until then you may be making the same mistake over and over again without realizing it. Let me give you an example. Geoff led the development of a website that was getting far more traffic than originally planned. He had an idea that he thought would improve the performance of every page on the site, so he implemented the change. This involved him writing some new Java code in one place and then going into the code for each page and adding one line to take advantage of the new, performance-improving code. It was tedious and time consuming. Geoff spent nearly an entire two-week sprint on these changes. After all that, Geoff tested and found that the performance gains were negligible. Geoff’s mistake was in not testing the theoretical performance gains on the first few pages he modified. Testing along the way avoids unpleasant surprises like this at the end.
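Geoff could have caught his mistake with a quick spot check: after modifying only the first page or two, time the old and new code paths and compare. The sketch below is purely illustrative and assumes nothing about Geoff’s actual code; the names renderPageOld and renderPageNew are hypothetical stand-ins for the original and “improved” page-rendering paths.

```java
// Hypothetical spot check: before spending a sprint rolling a performance
// tweak across every page, time the old and new code paths on one page.
// renderPageOld/renderPageNew are illustrative stand-ins, not real APIs.
public class SpotCheck {

    // Stand-in for the original page-rendering path.
    static long renderPageOld() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;
        return sum;
    }

    // Stand-in for the path that is hoped to be faster.
    static long renderPageNew() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;
        return sum;
    }

    // Measure elapsed wall-clock time of a code path in nanoseconds.
    static long timeNanos(Runnable r) {
        long start = System.nanoTime();
        r.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long oldTime = timeNanos(SpotCheck::renderPageOld);
        long newTime = timeNanos(SpotCheck::renderPageNew);
        System.out.printf("old path: %d ns, new path: %d ns%n", oldTime, newTime);
        // If newTime is not meaningfully smaller than oldTime here,
        // stop before touching the remaining pages.
    }
}
```

A one-page measurement like this takes minutes, not a sprint, and would have surfaced the negligible gain before the tedious per-page edits began. (For decisions that matter, a proper benchmark harness that warms up the JIT and averages many runs is more trustworthy than a single timing.)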
The state of the project is difficult to gauge. Suppose I ask you to estimate two things for me: first, a handful of new features;