the story is fully complete and ready to deploy. The questions here promote interactions with the entire team.
|Can we try it together before we check it in?||This question realizes the benefits of focused exploratory testing before code is checked in. By pairing with developers, testers can share and promote good test practices. Insights gained through exploratory testing can also lead to better automated story and unit tests.|
|Does the happy path work?||While all paths are important, it is counter-productive to focus on alternate or sad paths before the primary happy path is working. Agile testers answer this question before testing any further.|
|Is this a problem?||A tester will often highlight an anomaly to the team in the form of a question. This framing helps avoid a developer-tester blame game and opens the door to a candid dialog about the story. The result of this question should be a shared understanding of whether the anomaly is a new bug, a known bug, or perhaps intended functionality.|
|Can we fix this now?||It is usually in the best interests of the team to fix newly found defects as soon as possible. The result of this question should be one of:
· an immediate pairing session with the developer to fix the defect, or
· a defect card, if the defect can't be fixed right away but can be within the sprint, or
· a defect story added to the product backlog.|
|Is this story done?||A key shift for testers in this quadrant is moving away from the question "Is this potentially shippable?" towards "Did we meet the intent of this story?" The shippable questions are asked and answered in the right quadrants.|
Did we build the right thing?
Exploratory testing focuses on asking questions that trigger discovery. Context-driven testing principles and session-based test methods provide an effective framework for asking exploratory questions and sharing the discoveries. I have found little value in scripted manual tests because automated unit tests and story tests provide a more effective regression suite. More importantly, session-based exploratory testing is more effective at finding defects. Outputs from exploratory sessions give the team a deeper awareness of the system's strengths and weaknesses.
The table below is a small sample of questions that could be asked in exploratory test sessions. For a starting point on a more complete list of testing heuristics, check out the Test Heuristics Cheat Sheet by Elisabeth Hendrickson, James Lyndsay and Dale Emery. Better yet, talk to a skilled exploratory tester.
|How might a different user use the system?||Ask yourself how a user other than the primary user for the story might use the feature. Elisabeth Hendrickson suggests adopting "extreme personae" to stress system usage. Jean McAuliffe provides some good persona suggestions in her Lean-Agile Testing Seminar although I really like the Bugs Bunny and Charlie Chaplin personae suggested by Brian Marick.|
|How could the system be used in different scenarios?||Try using the system in different, non-obvious scenarios. Some scenarios may simply be different system states, such as low CPU, memory, or disk space, or starting up after a network or power outage. You may also want to consider more extreme "soap opera" scenarios first suggested by Hans Buwalda. Michael Bolton suggests a possible soap opera example that will give you a flavor for the line of thinking.|
|How does the system behave at boundary conditions?||Test the system at values just below, at, and just above boundary conditions. For example, if a text box is limited to 256 characters, does the system behave correctly when I paste 255, 256, or 257 characters into it?|
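Once an exploratory session pins down an expected limit, the three-point boundary probe can be captured as an automated check that feeds the regression suite mentioned above. Here is a minimal Python sketch under assumed names: `MAX_LEN` and `submit_text` are hypothetical stand-ins for the real limit and the real system under test, which you would drive through its actual UI or API.

```python
# Hypothetical sketch of a three-point boundary check around an
# assumed 256-character text-box limit.

MAX_LEN = 256  # assumed limit for this illustration


def submit_text(value: str) -> bool:
    """Stand-in for the system under test: accepts input up to MAX_LEN."""
    return len(value) <= MAX_LEN


# Probe just below, at, and just above the boundary.
assert submit_text("x" * 255)        # below the limit: accepted
assert submit_text("x" * 256)        # at the limit: accepted
assert not submit_text("x" * 257)    # above the limit: rejected
print("boundary checks passed")
```

The value of the sketch is the pattern, not the stub: one probe on each side of the boundary plus one on it catches the classic off-by-one errors that a single mid-range value would miss.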
Did we build it right?