When to Say No to Exploratory Testing

[article]
Summary:
Mukesh Sharma writes that there are some situations in which exploratory testing does not work. Understanding these limitations is important in devising a holistic test strategy for the team.

Exploratory testing is a popular testing technique used to enhance a product’s test coverage by helping the team better understand the system and find issues when time is of the essence. Additionally, creative exploratory testing helps break the monotony of running regular scripted test cases and empowers testers to learn as the tests are run.

Despite these benefits, there are some situations in which exploratory testing does not work. Understanding these limitations is important in devising a holistic test strategy for the team.

What Is Exploratory Testing?
Exploratory testing is any testing in which the tester is learning and designing tests on the fly, then applying that learning to execute further tests and improve both the quality of the testing effort and the product’s coverage. Per this definition, exploratory testing can be done with scripted tests; the learning element is what matters most. In its purest form, the tester is not guided by existing scripts and is not constrained by a predefined scope; testers use their own skills to explore the product and learn as they test. This thought process aligns with commonly used definitions of exploratory testing, including a very famous one by James Bach in his article “What Is Exploratory Testing?” That article also touches on exploratory versus scripted testing, which is of utmost importance for us here because it helps decide when exploratory testing will be of value.

So, when do you really say no to exploratory testing?

Say No When Exploratory Tests Replace Scripted Tests
Given the value and return on investment (ROI) of exploratory testing, several startups that work within budget constraints and cannot allocate the required funds to software testing resort to using it as their main technique to determine the product’s readiness to release. While this is certainly a good start, and better than not testing the product at all, these startups need to understand that it is not a permanent solution: exploratory testing cannot replace scripted testing. It is important to balance the two techniques so the right coverage is obtained within the cost and time constraints. Startups need to say no to exploratory testing, or rather to exploratory testing alone, once they have reached a few initial milestones. It’s important for all organizations to document and define the test coverage obtained through the overall execution effort in order to convince the product team and other stakeholders that the product is ready to ship. Organizations should have predefined measures and associated metrics that help confirm a product’s conformance with the exit criteria. This establishes a larger knowledge base about the product and its quality, rather than depending on individual testers for the coverage achieved through exploratory testing.
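One predefined measure of the kind described above is requirements-traceability coverage: the fraction of documented requirements that have at least one executed test case mapped to them. The following is a minimal sketch; all requirement and test IDs are illustrative, not from any particular tool or project.

```python
# Minimal sketch of a requirements-traceability coverage metric.
# All IDs below are illustrative examples, not from any real project.

def traceability_coverage(requirements, executed_tests):
    """requirements: iterable of requirement IDs.
    executed_tests: mapping of test ID -> set of requirement IDs it covers.
    Returns the fraction of requirements touched by at least one test."""
    covered = set()
    for req_ids in executed_tests.values():
        covered.update(req_ids)
    total = set(requirements)
    return len(total & covered) / len(total) if total else 0.0

reqs = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]
tests = {"TC-01": {"REQ-1", "REQ-2"}, "TC-02": {"REQ-2"}}
print(traceability_coverage(reqs, tests))  # 0.5 -> 2 of 4 requirements covered
```

A metric like this is trivial to compute once test-to-requirement mapping is documented, which is exactly the kind of record purely exploratory sessions often lack.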

Say No While Dealing with Compliance Testing
In areas where scripted testing is of utmost significance and deviating from it is not acceptable, you must make the tough decision and say no to exploratory testing. This is especially the case with compliance-based testing, in which you must adhere to checklists, government mandates, and specific domain-based testing where obtaining legal certifications is a necessity. For example, in the field of accessibility testing, where U.S. regulations such as Section 508 apply alongside guidelines such as the Web Content Accessibility Guidelines (WCAG) 1.0 and 2.0, the tester has to work from a predefined checklist of tests and may even use templates such as the Voluntary Product Accessibility Template (VPAT) that make it easier to verify the product’s compliance with defined standards.
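To make the checklist idea concrete, here is a minimal sketch of one scripted, checklist-style compliance check: flagging images without alternative text, in the spirit of WCAG’s non-text-content rule. It uses only Python’s standard library; the class and function names are illustrative, not part of any standard accessibility tooling.

```python
# Sketch of a single checklist-style accessibility check (illustrative,
# not a complete WCAG/Section 508 audit): every <img> element should
# carry a non-empty alt attribute.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Records the position of every <img> tag lacking non-empty alt text."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            # Attribute values can be None (bare attributes), so guard.
            alt = (dict(attrs).get("alt") or "").strip()
            if not alt:
                self.violations.append(self.getpos())  # (line, column)

def check_alt_text(html: str) -> list:
    checker = AltTextChecker()
    checker.feed(html)
    return checker.violations

page = '<img src="logo.png" alt="Company logo"><img src="decor.png">'
print(check_alt_text(page))  # second image has no alt text, so one violation
```

Checks like this are repeatable and auditable, which is why compliance regimes lean on them; exploratory sessions can then supplement them with the end-user perspective the article mentions.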


User Comments

8 comments
John McConda

I don't think the terms "exploratory" and "rigorous" are mutually exclusive. I've seen exploratory testing be very successful in regulated environments, including Pharma and Section 508 compliance.

In my experience, what matters most in these contexts is your evidence. I can get thorough, objective evidence from an exploratory approach just as easily as a scripted one. In fact, I find it easier to document my testing with an ET approach, because I'm not confined to filling in an expected result and matching it with an actual result.

August 13, 2013 - 9:47am
Oliver Erlewein

Hello Mukesh,

I like the idea of the article. We should actually be looking at whether there are spaces where ET is lacking.

1) What is scripted testing in your world? It's a bit hard to tell from the article.

2) You make the assumption that ET does not show coverage. What makes you say that? There are several methodologies to actually show coverage in ET. Some are actually the same as or similar to what you'd be doing in scripted testing too.

3) You still maintain the illusion that scripts have value. That they provide coverage or deliver better quality. There is no tangible proof that is the case. I'd stand to argue just the opposite. Yes, fancy scripts can blind stakeholders better, but that has nothing to do with the quality of testing. I think proper conveyance of the product status does that. I am convinced though that scripted testing is a far easier way to earn a wage/make $$$$ than exploratory testing. It also makes it easier to hide inexperienced testers.

4) You say that ET depends on good testers. So does scripting (if done correctly). So what's your point there?

5) You talk about exit criteria. What exactly are those? Are the ones you have really of value? Most can be classed as arbitrary. I think they detract from the actual goals testing should have. Most exit criteria focus too much on checks.

I'll give an example. A common criterion is: "There may be no showstopper or high defects, 10 medium defects and 30 low". The definition of the severity of a defect is in the eye of the beholder. Do the numbers get adjusted every time the scope changes? What if there is pressure during testing to go live? Does the criterion hold up (i.e., can the lack of quality prevent the go-live)? And many more.

6) As for standards, you can argue why such known entities have to be done by testers. Why are they not integrated into development (practices)? They are just as capable of ticking some boxes and running some automated style checkers. Testers should be in the position to check the checks at that point (maybe do some exploratory spot-checks where it counts). Especially since these things are very fundamental to a solution, and if found late in the process (by the time a tester gets to it), they could have severe consequences.

In general I think you can expand a bit on the full scope that ET can deliver. You hold on to the safety blanket that scripts (mistakenly) provide. I'd suggest you have a real and objective look at the scripts you have and validate that they actually give you the value you are expecting of them.

What I do agree with is that ET is often bastardized. Like Agile, many claim to do it but actually don't. The start-ups you are using as an example probably are in dire straits. Not because ET isn't sufficient, but because they have started with some of the ET process and have failed to improve the process and implement further methodologies. They have missed the boat on delivering continuous value to the people that matter. Maybe there is also a lack of understanding on the stakeholder side of what testing is and the value it brings. So the guidance/governance is setting the wrong goals.

Cheers

Oliver Erlewein

August 13, 2013 - 5:50pm
Brian Osman

While I appreciate that you've taken time to write this article, in my opinion your argument appears flawed on many levels.

You say "Exploratory testing is a popular testing technique" when I would contend that exploratory testing is an approach as opposed to a technique that is pulled out of some mystical bag of tools. That's ISTQB misinformation right there (refer to the ISTQB syllabus). I'm sure you would've been aware of that, especially as you reference James Bach's article "What is exploratory testing?"

You talk of compliance testing, particularly s.508 and accessibility guidelines, and in the same breath talk of checklists. A checklist is not a script but can be a guideline and a prompt to help us remember different things. According to your article, exploratory testing (ET) the technique may not be used for testing accessibility, as you substitute ET with predefined scripts because that is *mandated* (though I would doubt the validity of your claim, as already covered by the comment from John. Have you not seen http://www.satisfice.com/blog/archives/602 by James Bach or http://testingcircus.com/interview/interview-with-griffin-jones - Griffin Jones - who both employ/have employed ET in regulated environments where, according to your article, that should not happen). ET the approach would learn from the checklist and provide appropriate feedback. ET is complementary with predefined test scripts and can also stand on its own. It is misunderstanding and ignorance that confine an effective approach like ET to the commodity bin of reusable tools. That's what a certificationist would do, not a skilled software tester who understands context and ET.

I wonder what research you have done to validate your claims? Who have you spoken to within the community that supports your research? It appears very little, but I am curious to know more.

It seems to me that what you may be saying is "When to Say No to Good Testing" - I for one would be interested in your response.

*****

Sorry, I could not find where to post a reply to your reply, so my counter-reply is here...

It is great that ET is something you consider and value in your business. Thank you. The issue for me appears to be the shallow representation of ET in this article and what ET means to you.

So, with regards to whether it's a technique or an approach, I prefer the description of Cem Kaner and James Bach, who coined and further developed what *we* call exploratory testing: a mindset or a way of thinking about testing rather than a methodology (Cem Kaner, James Bach, Exploratory & Risk Based Testing, www.testingeducation.org, p. 10).

I find it interesting that you say "saying for mandates, standalone use of ET will not work..." and yet provide no proof that this is so. On the other hand, have a look at this post by Griffin Jones, who practices ET in a regulated environment - http://www.griffin0jones.com/2011/11/surviving-fda-audit-heuristics-for.... - and also James Bach, who blogs about the FDA recognizing ET in testing medical devices (http://www.satisfice.com/blog/archives/602), or an interesting post from James Christie, who has been involved in computer audits (http://clarotesting.com/page20.htm).

Thank you for your response; I look forward to discussing further your claims that ET cannot be relied on in *mandated* situations (for me, it suggests that we may have different understandings of what constitutes good ET).

Cheers

Brian

August 13, 2013 - 6:05pm
Rajini Padmanaban

Brian, John and Oliver: Thank you all for taking the time to read through the article and share your feedback. I am sharing these responses on behalf of Mukesh and here are our thoughts:

@Brian: Whether referred to as a technique or an approach, the bottom line is we look at ET as a part of the testing methodology. In fact, technique is a better choice of word here, as it refers to a skill and a practice too. If you see, we are drawing a distinction between mandates and checklists, saying that for mandates, standalone use of ET will not work. The point to note is that not all checklists are mandates, but mandates are most often guided by checklists that can be very useful for testers, e.g., the VPAT checklist. In such cases just ET cannot be relied on. We are making subtle points throughout the article to drive home the thought process that ET is very valuable, but there are scenarios where ET, or rather ET alone, cannot be relied on, and I think you are missing that point here. We are not discounting the value of ET. For example, we are not saying ET may not be used for accessibility testing; this is why we specifically say that for a user-driven approach in accessibility testing, ET is very valuable. We have been in the STQE business for the last 10+ years now and have been testing for a range of clients (all the way from start-ups to Fortune clients). We have been leveraging ET for several years now, and the thoughts shared in this post are adequately substantiated by the hands-on work we have been doing in this space using both scripted and ET tests.

@John – We appreciate your crisp, to-the-point feedback. As mentioned in the previous response, in the above Section 508 testing example you can still leverage ET as a supplemental technique to bring in the end-user experience in accessibility, but you cannot rely just on ET. In general with ET, specific testers may be very skilled in using ET and capable of clearly documenting their ET efforts, and may find it to be very effective when they are not confined to templates. But as a test manager/director who is planning a team’s test efforts, it becomes important to think of the larger team’s executional strategy when you work with testers of varying skills, teams with cross-group dependencies, etc. In such cases, especially when you are testing for compliance, the effectiveness of ET comes down when it is used as a standalone technique.

@Oliver – 1. By scripted, we mean a documented test that tells a tester what needs to be done, executed either manually or in an automated manner.

2. Our point here is not about a lack of tools, but a lack of using them. In most cases in our experience, the tester in an exploratory testing activity is in a free-flow testing space (especially in what we have seen in start-ups), just focusing on the test execution effort and the learnings arising from it, rather than using the information to understand the larger coverage it provides. Rather, when they adopt scripted testing techniques, practices such as using code coverage tools and establishing requirements traceability happen more formally.

3. Your point here almost sounds like you are discounting the value of scripted testing. If you are solely going to rely on ET (leave alone the question of test coverage), how are you going to reference back the testing effort you undertook on the product – whether it is you who needs to refer back sometime later, a new tester who wants to understand what happened on the project, a management person who wants to understand the test effort, whatever the case may be. I am surprised you say that scripted tests make it easier to hide inexperienced testers. In fact, a scripted test is where an inexperienced tester can be easily spotted, based on the test case design effort, the kind of coverage he/she is planning for the product, and whether his/her testing thought process has the required breadth and depth.

4. Where are we saying that ET depends on good testers and scripted testing does not need good testers?

5. We do not deny that test entry/exit criteria can be very subjective. That’s a completely different topic for discussion. It is up to the test team to define objective exit criteria in discussion with the product team and its end-user requirements. The point here is that when you rely purely on ET, it becomes so much more difficult to track progress against criteria such as requirements traceability, test effort progress, and overall test coverage, which are important in determining the team’s readiness to sign off on a release.

6. I disagree that standards are mere tick-box exercises that can be integrated into dev practices. Some can be done at the unit test level, while for most of them, especially in areas such as accessibility testing, the tester needs to run tests on a reasonably ready end-to-end (E2E) version of the product. Some in fact need to be tested by people with special needs, such as visually challenged testers.

We largely agree with you that even in ET, you need constant improvement of processes to ensure the technique is effective and gives you the required ROI. Where we disagree is with your discounting of the value of scripted tests. In our experience, drawing the right balance between scripted tests and ET is the most effective approach and also the biggest challenge for the test team. This is where a seasoned test manager comes into the mix to leverage his/her experience and guide the team.

@Stephen – Thanks for your comment. No, we are not referring to ad hoc testing in this article; we are clearly differentiating ad hoc and exploratory testing. Also, if you read the article carefully, we are not discounting exploratory testing – we totally see the value of it and appreciate the efforts of a tester who works the exploratory route. We are mainly focusing on areas where exploratory testing alone will not work. In the context of legislative requirements, if you have built a body of evidence that has in fact become your guide in your testing effort, it seems to me that the next time you work on a similar requirement, you are relying on that body of evidence, which has now become the script/guide you are basing your effort on.

@Tom - Rightly said. Thanks for your response, Tom.

August 14, 2013 - 5:21am
Stephen Hill

Hi,

Exploratory testing is an approach rather than a technique. You apply test techniques to exploratory testing.

I have been testing for legislative requirements for many years and have built up a considerable body of evidence using the exploratory approach. I would argue that I have been far more rigorous in my testing than I could possibly have been using a scripted approach because a script gives me a false sense of what I have *actually* done. When the context of my testing involves regulatory compliance I acknowledge that this is part of my context and I document what I am doing, how I am doing it and what I am seeing as part of my test reporting.

The article seems to be suggesting that ET is the equivalent of 'playing' with the software in an ad-hoc manner. This concerns me because in my book that is just plain bad testing. Even applying something like galumphing - one of the techniques you can use early on in a test cycle - has a structure to it!

Good exploratory testing is hard work; it taxes your brain and it requires a lot of skill. Please don't discount it and say 'no' to it!

Regards,

Stephen Hill

http://pedantictester.wordpress.com

August 14, 2013 - 9:57am
Tom Grega

As I understand the author's thesis, exploratory testing, while initially done “on the fly” (I would suggest an open-ended approach), takes the results of the initial findings to build a targeted and empirically based plan. At the risk of creating another flame war, this is similar to an agile approach in iteration. Assume you front any testing, regardless of approach or methodology, with the question: what problem are we trying to solve (building a test plan, the user experience, a platform for a program solution)? Having a clear understanding of the goal or problem you are attempting to solve is what brings merit to any approach or methodology.

I do not interpret the article to say yes or no (although the headers indicate otherwise). As an example, the author points out that while one must use scripted testing for accessibility review, as standards are legislated or prescribed, there is value in a user-driven approach, or an exploratory method if you will, as it provides broader coverage. The point is not to rely solely on one methodology versus the other.

What the author is arguing for is balance. Testing, engineering, analysis, raising children for that matter, requires a tool-kit. There are different tools for different problems. The key is understanding what problem you are trying to solve, and having the wisdom to know each tool’s strengths and weaknesses. A good senior manager will spend the time explaining the merits and defects to their team, based on real-world experience, and this is what the author is attempting to do here.

Tom

August 14, 2013 - 12:39pm
Mukesh Sharma

Thank you all for taking the time to read through the article and share your comments. Rajini has clearly provided detailed responses to your individual points. I want to reiterate that at an uber level, exploratory testing is very powerful; it exists due to its powerful reach and effectiveness when used correctly. The problem statement this article addresses is situations when exploratory testing is used incorrectly and how to avoid them so as to benefit from a combination of scripted as well as exploratory testing techniques.

@Chris: I agree with your spectrum definition. In fact, that is why at the very start I talk about how some views of exploratory testing include the use of scripted tests. In this article, I am saying no to the full form of exploratory testing (per your spectrum view) in specific scenarios where one needs to move inward on the spectrum, to a combination of both scripted and exploratory testing, to establish the right balance. Hopefully this also answers your other question: I am not referring to scripted and exploratory tests as being mutually exclusive.

August 19, 2013 - 6:44am
Chris Kenst

Rajini / Mukesh,

Can you explain how you see ET (exploratory testing) and scripted testing being related? I believe you omitted this important distinction. My impression based on the way you refer to ET in this article and in the comments is you believe they are mutually exclusive. Perhaps you are being too general in your use of the term exploratory testing?

In fact ET and scripted testing are not mutually exclusive - they sit on a spectrum with pure scripted on one end and fully exploratory on the other as depicted here: http://www.huibschoots.nl/wordpress/wp-content/uploads/2013/01/ETscale.png.

So when you say to exclude (say no) to exploratory testing are you referring to full / freestyle exploratory testing? Or are you referring to any and every form (the full spectrum) of exploratory testing – including those that use checklists?

August 21, 2013 - 5:58pm

About the author

Mukesh Sharma

Mukesh Sharma created QA InfoTech with a vision to provide unbiased Quality Assurance (QA) solutions for business partners worldwide. As CEO, he is responsible for the company’s global operations, marketing, sales, and development efforts. Under Mukesh's leadership, QA InfoTech has grown to five centers of excellence, 600 employees, and over $15 million in revenue.

Mukesh’s career spans DCM Data Systems, Quark Inc., Gale Group, and Adobe Systems in software quality engineering and test management roles. Mukesh is an active test evangelist.
