In software testing we have different schools, each with its own terms and definitions. James Bach belongs to my own school, the context-driven school of software testing. James created this mashup video, which may give you a taste of what we view as excellence in action.
The hero of the video is clearly a hero. Notice that he is not talking about prevention or getting things right up front. Instead, he wants to solve the problem of the day right now. You may have noticed the swagger, the tough questions, and the quick decisions. The context-driven tester may use jargon you are not familiar with, or have what appears to be an allergic reaction to specific terms like “best practice” or “test automation.” He might laugh out loud at metrics programs. To some people, that behavior can be off-putting, even offensive.
In this article, I’ll be trying to reconcile the two positions—to help someone who is unfamiliar with context-driven testing get past what might look like a gruff exterior to the beating heart that shares a desire to pursue test excellence.
A Few Things to Consider When Talking to a Context-Driven Tester
Arguments can be a good thing. The only way to improve my position is to change it, and I will only change it when faced with ideas that are different from my own. That means if you find yourself in an argument with a context-driven tester, he is treating you like an adult, with the hope that each person can learn something from the other. To put it differently: Your critics are your best friends.
Context-driven testers will reject imprecise terms. Take the term “test automation,” for example. The term implies that the entire work of the tester can be scripted up front, repeated, and is best done by a computer. Yet, test automation tools, specifically GUI tools, automate only a small percentage of the actual work of testing. They don’t come up with their own designs; they don’t diagnose defects, file them, explain them, or resolve them. The work the tools do, the test execution, has no feedback or intuition. That means that “test automation,” like “best practice,” is making a promise it cannot keep. You could argue about these terms (arguments can be a good thing), but arguing about terms doesn’t advance the practice. My advice: Stick to things that influence practice.
Be prepared to ask, and fight, about terms. Context-driven testers have developed a language like any scientific community, and outsiders can feel stuck. That’s OK. Define what you mean by terms like “regression testing” or “test suite,” and ask your new friend what he means by “heuristic,” “oracle,” or “sapience.” One common tactic is to give up trying to decide who has the correct definition of the word and instead say, “When I use the term test plan, I mean ___.” In many cases, context-driven testers prefer the discussion over terms to a premature “standard.” Be prepared for it.
Focus on skill. The idea that testing work can be done better or worse, that it is a skill that can be practiced and taught, is central to context-driven testing. This means that when detailed instructions fail, context-driven testers don’t try to write them at the next level down; instead, we engage people in the work, asking them to help define it.
Ask for an example. One way to build common ground with a context-driven tester is to talk about experiences and examples or, better yet, do some actual testing. This won't save you from the necessity of debating the meanings of words and the value of practices, but it will at least provide more data to inform those debates.
Context-driven testers believe test process is about tradeoffs. They see test process in terms of problems and possible solutions. If that is true, then there are no “best practices.” Instead, the best we have is guidance that can fail, what we call a “heuristic.” Therefore, if you know only one way to do something, or imply that there can be only one good way to test, expect an argument.
Toward a Real Dialogue
In his book The Five Dysfunctions of a Team, Patrick Lencioni details five specific problems. The behavior of context-driven testers is directed at addressing three of these dysfunctions in particular: fear of conflict, which creates artificial harmony and stifles real change; lack of commitment, which leads to people disengaging instead of openly disagreeing; and avoidance of accountability, which leads to people ducking the responsibility to call peers on counterproductive behavior and produces low standards.
Context-driven testers hope to advance the practice of software testing, avoiding dysfunction by fighting about language and ideas, exposing shallow agreement, and making their ideas explicit with actual examples. This is what they bring to testing, and what they believe to be valuable. If you can keep that in mind when tempers flare, things might just go a little better.
As a context-driven tester myself, I invite you to tell me I'm wrong. Have you had an experience working with testers from different schools? Did your experience conflict with my advice? Tell your story. Do you have a different idea about dealing with differences and conflicts in testing culture? Make your case. Or, if you agree or have something to add to my model or my advice, say so.
This article is my part of the conversation. The comments? That’s up to you.