I once called exploratory testing a dangerous technique. I’m here today to recant ... somewhat.
I wrote in the first edition of my book Managing the Testing Process, "Testing without written test cases and documented expected results has a lot in common with the Hispanic party tradition of letting blindfolded children pummel a hanging piñata with a stick to get at the candy inside. Would you bet your job and the quality of your company’s product on this methodology?" Strong words.
Since that time, I’ve come to a better understanding of exploratory testing. And, in many ways, I love it. For one thing, it's effective at finding bugs, particularly compared with following detailed manual test scripts to the letter over and over again. Requiring very little paperwork, exploratory testing is also efficient, which is always good when budgets are tight. (And when are they not?) As a side effect, exploratory testing readily adapts to changing conditions, because you don't have to update so many written tests when the product changes.
I’ve also used exploratory testing to find and fill gaps in written tests. Even the best processes for creating written tests have holes, and it's better that we find those gaps before our customers do!
Finally, exploratory testing is fun and creative for the testers. I enjoy sitting down in front of the system under test and guessing where the flaws might be, applying my testing skills, following where the clues lead me, and finding bugs.
And yet ...
I remain concerned about relying on exploratory testing alone, to the exclusion of a planned, systematic process of test analysis, design, and implementation carried out in parallel with the development of the system.
Despite these concerns, I now feel that exploratory testing is a best practice for most test projects. In fact, on-the-fly creativity during testing is something that good testers—and I include myself in those ranks—have always done.
I often tell testers, "A test case is a road map that takes you to interesting places in the system under test, and you should stop and look around when you get somewhere interesting." Recently, I've also taken to telling testers to spend about 10 percent of their time doing purely exploratory testing, often at the end of each day, based on what they've learned about the system over the course of the day.
Most testers, whether they are exploring or not, write something down. If I write "test print functionality," is that a guided exploratory test charter or a very ambiguous test case? What if I also jot down a few specific conditions and areas to cover? Am I still exploring?
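To make the ambiguity concrete, here is a hypothetical sketch of the same "test print functionality" idea written down at three points on the spectrum, from a bare charter to a fully scripted case. The field names and content are illustrative assumptions, not any standard format:

```python
# Three levels of documentation for the same testing idea,
# from purely exploratory to fully scripted (all hypothetical).

# A bare exploratory charter: a mission, nothing more.
charter = "Explore print functionality; look for layout and error-handling bugs."

# A guided charter: the mission plus areas to visit and a timebox,
# but no prescribed steps or expected results.
guided_charter = {
    "charter": "Explore print functionality",
    "areas": ["page setup", "printer selection", "cancel mid-job"],
    "timebox_minutes": 60,
}

# A scripted test case: exact steps and a documented expected result.
scripted_case = {
    "id": "TC-042",
    "steps": [
        "Open sample.doc",
        "Choose File > Print",
        "Select the default printer and click OK",
    ],
    "expected": "Document prints with correct pagination",
}

# Each form records more detail than the last; none of them
# eliminates tester judgment during execution.
for artifact in (charter, guided_charter, scripted_case):
    print(type(artifact).__name__)
```

At which point on this spectrum does "exploring" stop and "scripting" begin? The sketch suggests there is no sharp line, only more or less written down.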
The debate in some circles about exploratory testing has become polarized, but I think what we have is a spectrum that calls for a balanced mix of approaches. Smart testers and test managers already weigh the factors that determine how much test case documentation a project needs, and smart test teams succeed with both scripted and exploratory tests, striking a balance between the polarized extremes.