Talk Context to Me

[article]

value of practices, but it will at least provide more data to inform those debates.

Context-driven testers believe test process is about tradeoffs. They see test process in terms of problems and possible solutions. If that is true, then there are no “best practices.” Instead, the best we have is guidance that can fail, what we call “heuristic.” Therefore, if you know only one way to do something, or imply that there can be only one good way to test, expect an argument.

Toward a Real Dialogue
In his book The Five Dysfunctions of a Team, Patrick Lencioni details five specific problems. The behavior of context-driven testers is directed at addressing three of these dysfunctions in particular: fear of conflict, which creates artificial harmony and stifles real change; lack of commitment, which leads to people disengaging instead of openly disagreeing; and avoidance of accountability, which leads to people ducking the responsibility to call peers on counterproductive behavior and produces low standards.

Context-driven testers hope to advance the practice of software testing, avoiding dysfunction by fighting about language and ideas, exposing shallow agreement, and making their ideas explicit with actual examples. This is what they bring to testing, and what they believe to be valuable. If you can keep that in mind when tempers flare, things might just go a little better.

As a context-driven tester myself, I invite you to tell me I'm wrong. Have you had an experience working with testers from different schools? Did your experience conflict with my advice? Tell your story. Do you have a different idea about dealing with differences and conflicts in testing culture? Make your case. Or, if you agree or have something to add to my model or my advice, say so.

This article is my part of the conversation. The comments? That’s up to you.

 

User Comments

7 comments
Lisa Crispin's picture
Lisa Crispin

I think the people who label themselves as "context-driven testers" have done the software dev world a whole lot of good. It is good to be pushed to be specific with our terminology, to acknowledge the skills involved in doing a good-enough job of testing, to focus on business value. Not that they're the only ones doing this, but still folks like Matt have had a big influence for the better.

However, I feel that the whole idea of "schools" is divisive and unproductive. Meaningful terminology around testing is one thing, labels are another. And I've felt "judged" by people who label themselves as in this school, and told I don't know anything about testing.

Despite that, I've learned a lot even from the people who pass judgment on me, and am grateful for how their work has helped me improve how I can help my own team and customers.

No "school" knows everything. Most of us need to learn skills from lots of places and people.

July 29, 2013 - 4:02pm
Matthew Heusser's picture
Matthew Heusser

Thanks, Lisa. I can agree with you that the schools concept causes division; I just think the benefits outweigh the pain caused. That's /my/ opinion, of course, and you are entitled to yours. I'm happy to talk about it sometime! :-)

July 29, 2013 - 4:21pm
Jason Koelewyn's picture
Jason Koelewyn

Good article, the video is very clever.

I would caution you that as I read it I got the impression you have a thing against Test Automation. I understand what you dislike are the assumptions the term engenders, and I agree that in some situations clarification is needed.

We tend to refer to Automated Regression tests, Automated Service tests etc. to reduce the confusion.

July 29, 2013 - 4:28pm
Teri Charles's picture
Teri Charles

Michael,

I love this article! It really breaks down CDT in a way that, when I'm explaining CDT to someone, I can hand them this and really get the dialogue going. I myself sometimes struggle explaining some of the finer points of CDT, and this will help a lot.

And I must say that I especially like the section on test automation. I'm not the biggest test automation expert, but one of my biggest pet peeves is how some people just throw out the term "test automation" without understanding what, how, when, and why. I'm going to print out this section, put it in my wallet, and pull it out whenever anyone says, "Just automate it"! :-)

Again, nice job and thanks for breaking down CDT so well.

Teri

@booksrg8

July 29, 2013 - 4:53pm
Aaron Hodder's picture
Aaron Hodder

@Lisa "The idea of test schools is divisive" It's not the idea of test schools that's divisive, it's the presence of test schools that's divisive. And the 'idea' of test schools came from the observation of the 'presence' of test schools. The division is

July 29, 2013 - 4:59pm
Jesse Alford's picture
Jesse Alford

This article seems to have a confused position on arguments. For instance, this:

> [...] arguing about terms doesn’t advance the practice. My advice: Stick to things that influence practice.

doesn't (necessarily) square with this:

> **Be prepared to ask, and fight, about terms.** Context-driven testers have developed a language like any scientific community, and outsiders can feel stuck. That’s OK. Define what you mean by terms like “regression testing” or “test suite,” and ask your new friend what he means by “heuristic,” “oracle,” or “sapience.” One common tactic is to give up trying to decide who has the correct definition of the word and instead say, “When I use the term test plan, I mean ___.” In many cases, context-driven testers prefer the discussion over terms to a premature “standard.” Be prepared for it.

Arguments about terms advance practice to the extent that they are taken seriously and argued in good faith. (Or even bad-faith good faith, in which one party assumes the other is a reasonable person, and chooses the most reasonable interpretation of the other's argument, even though this is not always the case in reality.) If plans are nothing, but planning is everything, I'd say something similar applies to arguments about terminology: the terms are nothing, but our understanding of them is everything.

Testers could be successful using _flauxbarg_, _Varzy_ and _Kenning_ as terms if they first made sure they had developed deep agreement on their meaning. Actually, that might be an interesting exercise... a planning session in which a broad selection of common testing words (of both the commonly abused variety, such as "regression," and the commonly understood variety, such as "boundary,") were taboo, and had to be replaced with nonsense words with negotiated meanings.

Significant progress in the practice happens when someone goes from understanding "regression" as "fancy word for bug" to understanding it as "something that breaks something that worked before we started." (In case you think I am being unrealistic with this example, I assure you that it is drawn from reality and involved a tester with years of experience.) Of course, there is a converse transformation. A context-driven tester may come to understand that when a certain person says "regression testing" they mean "scripted testing of a new feature" or "replicating bugs reported by users;" this too is an important discovery.

It is possible for a person who attaches contextually correct meaning to "regression" to communicate better with programmers and product owners, and to more accurately evaluate the priorities of other people who use the word. This all impacts the practice of someone asked to "test for regressions" or "regression test."

July 29, 2013 - 6:49pm
Peter Walen's picture
Peter Walen

I've read this and thought on it and read it again.

I find the challenge to be problematic. My concern with terms is that so many people throw them around as if everyone agrees with their definition, without realizing that people do not agree. "Regression" is an awesome example. "Regression Testing" is another.

My hunch (Matt and I have taken different tacks on "automation" based on the context of our experiences, in public, while teaching a workshop together) is that the problem isn't with the idea behind "automated testing" (another vague and imprecise term). The problem is this: many people, managers and above in particular, have been sold a bill of goods that can never be delivered in their tenure at the company.

They are looking for Magic. They are looking for Harry Potter to flourish a wand and shout some Latin-ish sounding phrase and POOF! The software is tested! Of course, they'll deny that and say, "No we want the tests to run and everything to be green at the end." Sounds pretty much the same though, doesn't it?

If we can't define the terms in the context of the situation we are in, then why use a "common set" of terms at all? In my experience it takes a very open minded person to be willing to consider the possibility that they might be wrong. I'm wrong a lot. The context of our situation will determine what will work for good or ill. I have not found canned responses to be of much value.

Regards -

July 29, 2013 - 11:02pm

About the author

Matt Heusser's picture Matt Heusser

The Managing Consultant at Excelon Development, Matt Heusser is probably best known for his writing. In addition to currently serving as managing editor of Stickyminds.com, Matt was the lead editor for "How To Reduce The Cost Of Software Testing" (Taylor and Francis, 2011). He has served both as a board member for the Association for Software Testing and as a part-time instructor in Information Systems for Calvin College.
