to verify the From and To account numbers. You may have heard about the Fossbakk case, an example of insufficient input checking and confirmation failure. If exploratory testers had been involved at the time of user story definition, testers might have asked, "What happens if we put in a longer account number than the system expects?" Or, during the modeling stage (which I certainly would expect on an agile project with a financial transaction system composed of databases), an exploratory tester could ask questions such as, "How many ways can we make the transaction fail?" Testers who ask questions like that all the time, and who explore the answers before the code is written, may help the developers write better code. And they'll have the basis for some great, nasty tests to see what the system really does. The questions lead to the test design, which leads to the test execution, which provides learning for everyone on the project.
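A question like the long-account-number one translates directly into a test. Here is a minimal sketch in Python, assuming a hypothetical `validate_account_number` function and an assumed fixed account-number length; a real transfer system would differ in both:

```python
# Sketch: turning an exploratory question into an automated check.
# validate_account_number and ACCOUNT_NUMBER_LENGTH are hypothetical,
# invented here for illustration only.

ACCOUNT_NUMBER_LENGTH = 11  # assumed fixed-width account number

def validate_account_number(raw: str) -> bool:
    """Accept only digit strings of exactly the expected length."""
    return raw.isdigit() and len(raw) == ACCOUNT_NUMBER_LENGTH

def test_rejects_longer_account_number():
    # The Fossbakk-style question: what if the input is too long?
    assert not validate_account_number("1" * (ACCOUNT_NUMBER_LENGTH + 1))

def test_accepts_expected_length():
    assert validate_account_number("1" * ACCOUNT_NUMBER_LENGTH)
```

The point is not this particular validation rule but the flow: the tester's question becomes a test design, the test design becomes an executable check, and the check's result feeds learning back to the team.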
With these kinds of questions, testers can use exploratory approaches as a first cut to defining tests on chunks of code. Or, testers can use exploratory techniques after verifying their automated tests are working and providing information about the product in its current state.
You can see that each question leads to learning or test design. In the same way, each test design leads to more questions or learning. The three pillars--test design, test execution, and learning--reinforce each other. You don't need to differentiate among the activities; which activity a tester performs is not relevant. What is relevant is that the tester performs all of them, and feeds back information to the rest of the project team.
How Does Exploratory Testing Fit at the End of an Agile Project?
The goal on an agile project is to have a releasable product at the end of each iteration. For me, that includes all the testing required to make the product releasable. I prefer to do this with test-driven development, and the developers aren't the only ones who should be writing those tests. The entire project team (and especially the testers) needs to explore the product via questioning, test design and execution, and learning.
If a developer is developing features in small chunks, the amount of exploratory test execution needed at the end of the coding for a chunk, or for the whole product, is significantly decreased. Sure, developers are still going to make mistakes and cause side effects--that's why exploratory testing is helpful. But manual black-box exploratory testing that skips deliberate test design and fails to incorporate the learning is not adequate once developers implement by feature rather than by architectural piece.
Why Does Agile Change How Exploratory Testing Works?
Developers in waterfall projects tend to implement across the architecture. There's a group of developers writing the GUI, some others writing the middleware, still others managing the platform interactions. In this situation, there is no guarantee that the feature will work as designed, because the developers have no idea what side effects they've inserted into the code.
Conscientious developers do test as they develop. They may even mock up stubs to test their "features" inside their architectural layer. But the middleware people don't know the exact details of what the app layer is doing. And the platform people don't know the details of middleware implementation. Remember, developers make tradeoffs every day in the form of small design decisions. They don't know the implications of those decisions, which is why exploratory system-level testing is beneficial on waterfall projects.
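That stubbing practice, and its limits, can be sketched briefly. The layer names and functions below are hypothetical, invented to show how a GUI-layer developer might fake the middleware beneath them; the stub only encodes that developer's assumptions about the real middleware:

```python
from unittest.mock import Mock

# Hypothetical layered design: the GUI layer delegates a transfer to a
# middleware object. A GUI developer can stub the middleware and test
# their own layer in isolation.

def transfer_screen_submit(middleware, from_acct, to_acct, amount):
    """GUI-layer handler: delegates to middleware, maps result to a message."""
    ok = middleware.execute_transfer(from_acct, to_acct, amount)
    return "Transfer complete" if ok else "Transfer failed"

middleware_stub = Mock()
middleware_stub.execute_transfer.return_value = True  # assumed happy path

print(transfer_screen_submit(middleware_stub, "111", "222", 50.0))
```

The stubbed test passes, but it says nothing about how the real middleware handles, say, an over-long account number--which is exactly the kind of cross-layer side effect that system-level exploratory testing is there to find.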
In contrast, for many agile projects, developers implement by feature, implementing the entire code