Deception and Self-deception in Software Testing


Untruths about software testing are common. Managers, programmers, and other people on software projects don't always mean to deceive. Quite often, they fool themselves into believing what they want to believe. But sometimes they lie deliberately and even pressure testers to lie. And testers can also practice deceptions and self-deceptions of their own. In this column, Fiona Charles describes four categories of common deceptions and self-deceptions in testing and outlines what testers need to do to address them.

Have you heard any of these lately?

"The testers are finding too many bugs and holding up the project."

"Anyone can test. We just have to give them the right process to follow."

"Our test cases will provide complete system coverage."

Not one of these common statements about testing is true. At least one of them could have been said by a tester.

Communicating accurately about testing is essential to the tester's and test manager's job. We have a responsibility to dispel myths and misconceptions about good testing and what it can and cannot do. We must also be alert to, and prepared to address, distortions or attempts to spin the message about testing from any source—including ourselves.

Testing deceptions and self-deceptions often arise from excessive optimism—the triumph of hope over experience, or hope over hard data. Sometimes they come from people attempting to find a place to lay blame. Humans can fool themselves into believing all sorts of impossible things, and occasionally they even resort to deliberate lies. Exaggerating or downplaying risk, inflating test coverage, blaming testing for project delays when the product quality is poor, misrepresenting testing status and findings—these are only some of the kinds of deceptions and self-deceptions testers encounter on software projects. Let's look at some more typical examples.

Deceptions Practiced on Testers
"The software is done. It's ready to test."

Every tester has heard this one, only to discover that "done" and "test-ready" don't mean what the project plan said would be complete by this date. Somehow, little things like unit tests—sometimes even finishing coding on some modules—have ceased to be requirements. The programmers made the date, so they're "done."

We've all heard plenty of others. Here are some common ones:

"We didn't change anything significant. You don't need to test." [Although nobody did an impact analysis, and we don't actually know what might break.]

"The infrastructure upgrade [that includes the operating system, the database engine, and the compiler version] will be transparent to the applications. You'll only need to do a sanity test."

"You only need three weeks for testing." [Because the code is late and we cut three weeks from the test schedule.]

Are programmers, project managers, and others lying when they say these things? Quite possibly not, but if not, they've certainly fooled themselves into believing what they want to believe.
