How to Make the Untestable Testable

[article]
Summary:

When you are told by someone that something is not testable, take a deep breath and engage the person in a conversation. The conversation should not be about why something is not testable—not directly. It should focus on understanding what the person is experiencing, exploring different interpretations of information, and helping make the untestable testable.

It strikes me as odd to hear someone say, “This can’t be tested.” In response, I claim the opposite: anything can be tested. However, one must be comfortable with the result of testing, including failure, loss of product, loss of money, or bodily harm. When a claim is made with this understanding, might anything be testable?

I recently heard this statement again in a slightly different context, and with the benefit of having more time in a testing role, I have reconsidered what this might mean. If nothing else, my testing roles teach listening skills and patience. When someone approaches you and makes this statement, what else might they be saying about the product under test, or possibly, about themselves?

When you are told something is not testable, take a deep breath and engage the person in a conversation. The conversation is not about why something is not testable—not directly. It is a conversation to understand what someone is experiencing, to explore different interpretations of information, and to help make the untestable testable.

Statement or Conclusion
My consideration begins with a tester’s motivation for making the statement. I want to explore his thinking along those lines. I will not jump to the defense of the product and its testability. Rather, I want to determine if the tester is making a statement or drawing a conclusion.

A statement is more of a conversation starter. It says where we start on the spectrum of testability. “This can’t be tested” probably falls somewhere near the edge of the spectrum. In this case, I want to test this tester’s statement by asking about the information he has about the product and his ideas around what can’t be tested.

Similarly, if he is drawing a conclusion, what information is he using to do so? Perhaps he has attempted to evaluate some function and found it challenging to determine a result, so he states, “This can’t be tested.” My intent is to discover and learn about his perceptions.

Let’s consider statements and conclusions through the following scenarios.

Just Frustrated
Once, a tester approached me, very frustrated, and exclaimed, “This can’t be tested!” While my first reaction was to help with testing, I tried to be empathetic first. “Perhaps it is untestable,” I said, and talked about the technical challenges of that particular function and other factors that impact testability. This kind of exploration eases a person’s frustration, distracts him from the task at hand, and draws him into conversation.

I started a collaborative investigation into the product under test. Remember, you need to start simple—verify that the product has been deployed. How many times have you started to evaluate something only to discover it was not included in the last build?
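
If that first check can be automated, all the better. Here is a minimal sketch in Python, assuming the application exposes a hypothetical /version endpoint that reports the deployed build; the URL, endpoint, and build value are illustrative, not a real API.

```python
# Sketch of a deployment smoke check; the /version endpoint, URL, and
# expected build value are illustrative assumptions, not a real API.
import requests

def build_is_deployed(base_url: str, expected_build: str) -> bool:
    """Return True if the running application reports the expected build."""
    response = requests.get(f"{base_url}/version", timeout=5)
    response.raise_for_status()
    return response.json().get("build") == expected_build

if __name__ == "__main__":
    if not build_is_deployed("http://test.example.com", "2013.10.1"):
        print("Stop here: the build under test is not actually deployed.")
```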

Discuss the purpose of the test and how the test plan attempts to learn about behavior and discover flaws. Does the plan align with business expectations for the product?

With this foundation, I talked about his observations and how he drew his conclusions. We gathered information from the application to validate observations (database records, log files, screenshots, etc.). I checked along the way to determine if some level of testability was emerging and continued in that direction. When our combined efforts yielded few results, we invited the developer into our conversation and discussed methods to provide transparency into the product.

A Challenge
Another time, I worked with a developer on a project who, while competent, was not always tester-friendly. This person approached me once and claimed—with a hint of hubris—that his code was not testable. He suggested I not waste my time with it.

If you are in a situation like this, begin the conversation with an exploration of what the person did to make the product untestable. Inevitably, questions must be posed to subtly suggest that if the product goes unevaluated or under-evaluated and is released for use, the risk of failure is present (the scope of failure depends on the product; I’m sure we can all imagine sufficiently large failures), as well as the potential for consequences. My conversation seeks to determine how comfortable this person would be if such a scenario were allowed to play out.

Alternatively, the conversation might explore the motivation for reducing the testability of the product. While it seems counterintuitive to reduce testability, an exploration might reveal security concerns, time constraints, ego (be careful when exploring this), or other factors outside of the person’s control. Regardless, as a tester you can report the risk in not evaluating all or part of a product.

Conditions of the Test
When I overhear someone claim something cannot be tested, my tester’s curiosity leads me to explore that person’s test conditions through the following questions: Are the conditions such that some system dependency is unavailable, uninstalled, unpowered, or out of service? Might a long-running system process prevent or delay the evaluation? Is there something about today that impacts testability (leap day, end-of-month processing, or lunch time in a different time zone)? Is there an opportunity to perform negative testing?
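
For the date-related conditions in particular, one approach is to make “today” an input rather than reading the system clock deep inside the code. A minimal sketch in Python, with an illustrative month-end rule:

```python
# Sketch: treat "today" as an input so date-sensitive behavior (leap day,
# end-of-month processing) can be tested on any calendar day.
from datetime import date, timedelta

def month_end_processing_due(today: date) -> bool:
    """True on the last day of the month, when the batch job should run."""
    return (today + timedelta(days=1)).day == 1

# Production code passes date.today(); tests pass whatever day they need.
assert month_end_processing_due(date(2024, 2, 29))       # leap day
assert not month_end_processing_due(date(2024, 2, 28))   # not month end in 2024
assert month_end_processing_due(date(2023, 2, 28))       # month end in 2023
```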

Differing Opinions on Testing and Testability
During one project, I discussed some development activity with the developer and suggested she deploy the code so I might review it. She replied that there wasn’t really anything to test. In the back of my mind, I was thinking about the development we had just discussed and how she had worked long hours over the last month. For that amount of time and energy, I asked, “There really isn’t anything to test?”

I raised a concern with my project manager, and we continued with our work without the evaluation I suggested. When we tested the code, we found defects and missed requirements. It is a sad and often repeated tale. However, when we discussed our successes and opportunities after the fact, I discovered a different point of view.

The developer did both experimenting and prototyping to determine her course of development. Along the way, her successes served as a springboard to further experiments. She considered her work to be scaffolding—code that would support development of more code. She also considered scaffolding as nothing to really test.

The result of this conversation was that we worked with her on subsequent releases to evaluate scaffolding code earlier. I asked to review the code in order to gain an understanding of how it integrated into the final product. Additionally, the testing team provided feedback on differences in expected behavior and user interface aspects. Note that we did not open defects (because we were looking at ongoing development) but provided observations on changes to the product—an iterative concept familiar to many agile teams.

Deeply Embedded
While on the topic of scaffolding code, or of any deeply embedded functionality, I sometimes find a function difficult to test precisely because it is so deeply embedded. The same might be said for infrequent processes (month-end processing) or long-running processes (large payrolls).

Deeply embedded functionality includes programs that are many layers deep in a hierarchy, or even a difficult-to-reach part of a car engine.

In these cases, the evidence of operation is difficult to generate. There may be evidence that implicitly indicates the operation occurred. If there is never any evidence, I recommend you perform an experiment to determine if the operation is ever used (for example, add log statements to demonstrate the execution).
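
A minimal sketch of that experiment, assuming the codebase is Python and you can touch the embedded function; the function and logger names here are illustrative:

```python
# Sketch: add log statements around a deeply embedded operation so a log
# line becomes the observable evidence that it executed. Names are
# illustrative; the technique is the point.
import logging

logging.basicConfig(filename="app.log", level=logging.INFO)
log = logging.getLogger("embedded")

def recalculate_rates(batch_id: int) -> None:
    """A deeply embedded operation with no user-visible output."""
    log.info("recalculate_rates entered for batch %s", batch_id)
    # ... existing embedded logic ...
    log.info("recalculate_rates completed for batch %s", batch_id)

# Exercise the system, then search app.log. If the marker never appears,
# the operation may never be reached at all.
```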

Environment Differences
The differences between non-production (test) environments and production environments are often cited as reasons why some functionality cannot be tested, or worse, goes untested. These differences include functional, data, performance, and resource-access limitations. There may be functions that are automated in production but that the tester must operate manually in a test environment. Production data represents real customers and real scenarios that may or may not exist in test, and the data may contain sensitive information. The hardware in production is procured for high performance and is usually clustered. Lastly, applications may require different credentials to connect to databases and accept information from the Internet.

Many of these differences are product risks and project teams must be aware of them. To mitigate these risks, you need to minimize differences by asking yourself the following questions: Can manual functions be automated for a single day? Can production data be mined and scrubbed for use in test environments or can it be mocked? Is it beneficial to build test environments with hardware that matches production environments and matches clustering? Can the resource access in the production environment be evaluated before large deployments to minimize impact on the application?
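
For the data question, here is a minimal sketch of one scrubbing approach, assuming tabular records and a known list of sensitive fields; the field names are illustrative, not a prescription:

```python
# Sketch: scrub sensitive fields from production-shaped records so real
# scenarios can be reused in test without real identities. Field names
# are illustrative.
import hashlib

SENSITIVE_FIELDS = {"name", "email", "phone"}

def scrub(record: dict) -> dict:
    """Replace sensitive values with stable, anonymous placeholders."""
    clean = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            digest = hashlib.sha256(str(value).encode()).hexdigest()[:8]
            clean[field] = f"{field}-{digest}"
        else:
            clean[field] = value
    return clean

print(scrub({"name": "Ada Lovelace", "email": "ada@example.com", "balance": 120.50}))
```

Hashing instead of blanking keeps the placeholders stable across runs, so records that shared a value before the scrub still share one after it.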

Please Don’t Test This
In one instance, a developer approached my colleague and said he could test transaction A but not transaction B, which was still under development.

“Why not?” my eager colleague asked. “Let’s learn about it together!”

The developer cautioned that the code was still experimental.

“Even better,” my colleague replied. “We can help each other make it defect free!”

I commended my colleague. Collaborate and converse—collabversate—to improve your team dynamics and your products.

Too Simple to Test
Many times I have claimed as a developer, and have heard claimed more often as a tester, that a change is too simple to test. To the claimant, the change may appear simple: one line of code or an update to a configuration file. Other simple changes are an addition of a sentence in an error message, correction of syntax in a customer-facing message, or an addition of a phone number.

Simple changes can introduce errors. I’ve seen instances in which the new sentence in the error message contains a spelling error or an updated phone number is out of service. Having experienced many of these, I now count spelling and “simple changes” among my most evaluated scenarios. Also, while such changes may look good in the code review, a better test is to check them after they are deployed.
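
That post-deployment check can itself be a small automated test. A minimal sketch, assuming a hypothetical endpoint that returns the customer-facing message; the URL and expected copy are illustrative assumptions:

```python
# Sketch: check the "simple change" in the deployed product, not just in
# the code review. The URL and expected copy are illustrative assumptions.
import requests

EXPECTED = "For help, call 555-0100."  # the corrected customer-facing text

def test_corrected_message_is_deployed():
    response = requests.get("http://test.example.com/api/orders/invalid-id")
    assert EXPECTED in response.text, "Deployed text differs from the review"
```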

 

The Test is Dangerous
A few times, I’ve evaluated a product that has some inherent danger in its normal operation. Clearly, an evaluation of rocket engines, most military ordnance, or modes of power generation carries some risk to both the tester and the product. While testability is reduced for these types of products, the question of whether to test becomes a discussion with my project manager, and it is a discussion of risk. That is, what are the probable outcomes of this product’s operation if a specific test is not performed? Consider the following statements:

“I can’t test this DELETE function because it deletes most of the database.”

“I can’t test this configuration change because it may decrease the price of our key product by a factor of ten.”

In each case, there is still a discussion around probable outcomes, the value of the test, and methods of isolation. That is, for these types of products, I ask how we can mitigate the danger and still perform a test that provides valid and valuable results.

For the DELETE statement:

  • Can we mock the database?
  • Can we create a similar table in the same database and populate it with data from the real table? (A sketch of this idea follows the list.)
  • Can we intercept the query before it executes (to inspect its syntax)?
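
Here is a minimal sketch of the second idea, using SQLite purely for illustration: clone the table, aim the DELETE at the clone, and confirm the original is untouched.

```python
# Sketch: rehearse a dangerous DELETE against a scratch copy of the table
# so the real data is never at risk. SQLite is used purely for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "open"), (2, "closed"), (3, "closed")])

# Clone structure and data, then aim the DELETE at the clone.
conn.execute("CREATE TABLE orders_scratch AS SELECT * FROM orders")
conn.execute("DELETE FROM orders_scratch WHERE status = 'closed'")

scratch = conn.execute("SELECT COUNT(*) FROM orders_scratch").fetchone()[0]
original = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(f"scratch rows remaining: {scratch}, original rows untouched: {original}")
```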

For the configuration setting:

  • Should we have limits on settings that are checked in the application?
  • Can automated tests evaluate a change in the configuration file AND in the application? (A sketch of such a limit and test follows the list.)
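
A minimal sketch of such an in-application limit, assuming a hypothetical price_multiplier setting; the key name and bounds are illustrative:

```python
# Sketch: guard a risky setting with an in-application limit so a bad
# configuration value fails loudly instead of repricing the product.
# The key name and bounds are illustrative.
import json

PRICE_MULTIPLIER_BOUNDS = (0.5, 2.0)

def load_price_multiplier(raw_config: str) -> float:
    value = float(json.loads(raw_config)["price_multiplier"])
    low, high = PRICE_MULTIPLIER_BOUNDS
    if not low <= value <= high:
        raise ValueError(f"price_multiplier {value} is outside [{low}, {high}]")
    return value

# One automated test covers both the file change and the application check.
assert load_price_multiplier('{"price_multiplier": 1.1}') == 1.1
try:
    load_price_multiplier('{"price_multiplier": 0.1}')  # the factor-of-ten mistake
except ValueError as err:
    print("rejected:", err)
```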

When you perform these kinds of tests, let your project team know. They can be prepared for weirdness in the environment, make plans for different work, or even participate in a bug hunt.

User Comments

7 comments
Matthew Heusser

"They can be prepared for weirdness in the environment, make plans for different work, or even participate in a bug hunt." --> I think that's the key. A hidden part of /*any*/ test is "... and when I finished it, the system continued to operate as we expected."

If you are going to make changes to the platform, do it while testing, and note new things that break.

October 8, 2013 - 3:10pm
Alessandra Moreira

Hi Joe,

Great article! I love your approach, more like a counselor than a tester: taking the time to hear and understand the meaning behind phrases and comments. It is a great reminder to stop and look beyond what is being said to what is being communicated.

"Too little to test" is a phrase I've heard so many times. A change to one line of code is all it takes to break something, especially when the change is made to integrated systems with multiple interfaces or legacy code. You never know which 'other lines of code' call that 'one line of code,' so there's hardly ever any change that is too little to test!

Matt - "and when I finished it, the system continued to operate as we expected" is a hidden but very important part of what we do for sure.

October 8, 2013 - 3:25pm
Jon Hagar

Nice piece with good examples. One area where I "played" a little was an AI program. At one level, AI software can be tested with standard "techniques," but these may not provide useful information. For example, in testing a neural network, you could show you were testing coverage of code and functions. But the "learning" function of the net was difficult to test. Was it learning? Was it learning correctly? Was it learning what you wanted or expected? These almost seemed untestable. But I used ideas such as you suggested. It was tested, and we found interesting information about it, but it was more like testing a human than a program. So I wonder if there is anything that is really untestable.

October 8, 2013 - 4:07pm
Timothy Western

One thing that went unmentioned is that sometimes the 'can't test that' is in response to the effects of an action being so far downstream as to be 'hard to test,' or outside of a given team's scope.

October 8, 2013 - 4:10pm
Carl Shaulis

Extremely interesting article! I ran into a slight twist on the "not testable" last week, which was "too expensive to test." The developer agreed it could be tested, but the cost was not worth the low risk associated with the test.

October 9, 2013 - 2:18pm
Areli Ibarra

Thank you for this article. I was reading and just could not agree more with the different scenarios you presented. I would also like to add that many times the challenge for me has been who owns the testing, validation, or staging environment. In organizations where the QC/QA team has control over what gets into their environment, "cannot be tested" is heard less often. Where I have worked in organizations where only the DBA has access, or with hosted environments (for example, Oracle On Demand), I hear it constantly.

October 17, 2013 - 8:06am
Mark Jones

In my experience, if something is hard to test, the next question asked is what the risk of not testing it is, and then whether the level of risk, usage of the functionality, and the time/cost needed to test it can be justified. Something that is low impact and time-consuming to test could well receive little or no testing if there is a multitude of other features awaiting test that can be worked through far more easily.

July 4, 2014 - 5:42am
