Noel: You've mentioned that testers on agile teams may struggle to keep up with the pace of development. Could you go a little more into how that happens, and what testers can do to attempt to introduce ATDD as an alternative?
Nate: Testers on agile teams can struggle to keep up if they have a waterfall mentality, in which tests are verifications that happen after development is done. This sets up a dangerous and unnecessary cycle, because testing tasks get compressed at the end of the iteration. Eventually, the testing starts creating “back pressure” on how much the team can complete in an iteration, which causes painful reactions like working overtime to fix the issues we find near the end, or even pushing testing into the next iteration. I call this “mini-waterfall testing.”
The problem of late feedback from tests was always present in waterfall, but it didn't really hurt until late in a project. With agile, you get all that pain every two weeks! Rather than Band-Aid the issue, we have to let that pain guide us to a systematic solution. If late feedback from testing is the problem, why don't we test earlier? If earlier is better, what's stopping us from testing first?
We have an opportunity to stop treating tests as verifications, because at best all that does is test the bugs out. We can shift to treating tests as specifications. To do that, we need to work together as a whole team to specify the behavior of new features in the language of the business by using concrete examples. Now we have a shared definition of done for a new feature before we develop it. With ATDD tools like FitNesse and Cucumber, we can even make these plain-language examples into executable specifications without changing them into a format that only a programmer can understand.
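To make the idea concrete, here is a minimal sketch of an executable specification in plain Python. The feature (orders over $100 get a 10% discount) and the function `apply_discount` are hypothetical, invented purely for illustration; real ATDD tools like Cucumber or FitNesse would express the examples in business language (Gherkin steps or wiki tables) and wire them to code like this behind the scenes.

```python
# Executable specification sketch (hypothetical feature, for illustration).
# The plain-language examples the team agreed on:
#   "Orders over $100 get a 10% discount."
#   "Orders at or under $100 pay full price."

def apply_discount(order_total):
    """Return the amount due after any volume discount."""
    if order_total > 100:
        return round(order_total * 0.90, 2)
    return order_total

# Each test mirrors one agreed-upon business example, so passing tests
# mean the shared definition of done has been met.
def test_orders_over_100_dollars_get_10_percent_off():
    assert apply_discount(200.00) == 180.00

def test_orders_at_or_under_100_dollars_pay_full_price():
    assert apply_discount(100.00) == 100.00

if __name__ == "__main__":
    test_orders_over_100_dollars_get_10_percent_off()
    test_orders_at_or_under_100_dollars_pay_full_price()
    print("all examples pass")
```

The point is that the examples exist, and fail, before `apply_discount` is written; making them pass is what "done" means for the feature.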
Noel: Have you witnessed any initial resistance from testers or developers to the "test first, code later" mantra—and what's the reason behind it?
Nate: I think the most common cause of resistance to test-first is the myth that test-first means we write all the tests up front. I think that's a really bad idea. In fact, it's a prescription for rework and frustration. By contrast, ATDD is incremental. It's enough to start with just a few key examples of the new feature, pick one, and elaborate a handful of tests. Then we develop just enough new code to make those tests pass without breaking anything that's already passing.
These new passing tests give us confidence that we're really done with part of the work, like a “save point” in a video game. We go back to the key examples and pick the next behavior, elaborate with tests, and implement enough to make those tests pass. When we're all done we might have, say, twenty passing tests for this new feature, but we did not write them all up front. We actually collaborated throughout the iteration to specify these tests and make them pass incrementally, and along the way we avoided a lot of rework and guessing. I love the positive tension of specifying “just enough,” because it makes everyone collaborate daily instead of wasting time in one big up-front meeting.
Noel: For those who attend your session at STARCANADA, what do you hope they're able to take home to their own teams and projects?