Dealing with a Test Manager’s Most Annoying Problems

Summary:
A test manager has to perform in multiple dimensions, using a variety of professional and interpersonal skills daily. With all these career facets, there are lots of different areas that can pose a problem. Here are the most common (and most annoying) things a test manager hears on a regular basis, as well as some strategies for how to deal with them.

A test manager has to perform in multiple dimensions, using different professional and interpersonal skills daily. They have to give accurate test estimations; be fully aware of the functions and requirements under development; define the scope of testing; apply appropriate testing metrics; plan, deploy, and manage the testing efforts; and motivate and encourage testers, even if the manager isn't their team lead.

With all these career facets, there are lots of different areas that can pose a problem. Here are the most common (and most annoying) things I hear as a test manager, as well as the strategies I’ve developed for how to deal with them.

“Just test the main functions”

The problem: After changing a small but important piece of code, the testers are asked to test “just the main functions, just to be on the safe side.” This request is usually connected to understandable time and effort restrictions, but it’s annoying nonetheless.

The solution: Analyzing the changes with the developers responsible for the function usually helps. I use white-box techniques and build the test suite from those results, combined with the existing regression test cases. A good Pareto analysis that identifies which 20 percent of the application is used 80 percent of the time can also help.

Specify a limit on testing if necessary, agreeing with the project stakeholders on its scope and its possible consequences. I often use risk assessment matrices for this, and there are pretty good templates available online.
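
As a rough illustration of the Pareto idea, here is a minimal Python sketch that picks the smallest set of features covering roughly 80 percent of recorded usage. The feature names and counts are invented for the example; in practice the numbers would come from your analytics or access logs.

```python
from collections import Counter

# Hypothetical usage counts per feature, e.g., aggregated from access logs.
usage = Counter({
    "login": 5200,
    "search": 3100,
    "checkout": 2400,
    "invoice_export": 400,
    "profile_edit": 350,
    "admin_reports": 150,
})

def pareto_cut(usage_counts, threshold=0.8):
    """Return the most-used features that together cover `threshold` of all usage."""
    total = sum(usage_counts.values())
    covered = 0
    selected = []
    for feature, count in usage_counts.most_common():
        selected.append(feature)
        covered += count
        if covered / total >= threshold:
            break
    return selected

# These are the candidates for the "just the main functions" regression run.
print(pareto_cut(usage))  # with the invented data: ['login', 'search', 'checkout']
```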

“Move it from legacy to new”

The problem: Every company has a good old module or function running in an outdated framework that is not supported any more. It has to be rewritten in a new (probably fancy) framework, but it should work as before.

The solution: Start with a deep and thorough mapping of the given function, talking with the people who originally developed it, if possible. I've always tried working closely with the developers on the implementation itself. The rewrite is also a good opportunity to introduce some redesign, deprecating unused data tables or UI elements.
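
To make "it should work as before" checkable, a parity (characterization) test that feeds the same inputs to both implementations and compares the outputs can anchor the rewrite. A minimal pytest sketch, where `billing_legacy`, `billing_new`, and the `calculate_fee` signature are invented placeholders for whatever is being migrated:

```python
import pytest

# Hypothetical wrappers around the old and the rewritten implementation.
from billing_legacy import calculate_fee as legacy_calculate  # assumed module
from billing_new import calculate_fee as new_calculate        # assumed module

# Characterization cases: real inputs recorded from the legacy system work best.
CASES = [
    {"amount": 100.0, "currency": "EUR", "customer_type": "retail"},
    {"amount": 0.0, "currency": "EUR", "customer_type": "retail"},
    {"amount": 999999.99, "currency": "USD", "customer_type": "wholesale"},
]

@pytest.mark.parametrize("case", CASES)
def test_new_matches_legacy(case):
    # The rewrite must reproduce the legacy results for the same inputs.
    assert new_calculate(**case) == legacy_calculate(**case)
```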

“It was working on local”

The problem: After finding a serious, probably blocking bug, you get this excuse from the developers: “But it was working on local.”

The solution: Write a very precise description of the bug, then let the devs deal with it. Don't fall into the trap of codependency. For the long term, emphasize the need to run unit tests before handing the item over to the QA department.
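
For that long-term point, even a tiny unit test gate makes the "works on local" conversation shorter. A minimal pytest sketch, with an invented `parse_order` function standing in for whatever was changed; the idea is simply that these run before the build ever reaches QA:

```python
# test_parse_order.py -- run with `pytest` before handing the build to QA.
import pytest

from orders import parse_order  # hypothetical module under test

def test_parse_order_happy_path():
    order = parse_order('{"id": 42, "items": ["book"]}')
    assert order.id == 42
    assert order.items == ["book"]

def test_parse_order_rejects_empty_payload():
    # The kind of edge case that "works on local" until QA feeds it real data.
    with pytest.raises(ValueError):
        parse_order("")
```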

“Have the new test team ready in a month”

The problem: Although you’re short on experienced testers, management requires you to fill up the test team quickly for a new project, and they want everybody to know everything about the product (although there’s no training available).

The solution: Spend time with management describing the current labor market. Meanwhile, try to be creative in finding new colleagues by using a mix of new ads, social media, and training interns. Referral programs can also be really useful; it’s not a coincidence that large companies like Google use them as a primary strategy for finding new colleagues.

“The test environment will be perfect”

The problem: Although the test environment should be a perfect replica of the production environment, it often falls short. I’ve experienced this to be the case with databases and third-party tools. Setting up a QA environment for them can be expensive and requires a lot of administration.

The solution: I’ve found it absolutely necessary to map the differences between the two environments thoroughly, along with the risks associated with each difference. I’ve introduced to my teams the concept of the “Szegedi rule,” which I immodestly named after myself: If there are n differences between the two environments, there will be 2n issues among the test results. This means you should allot a tremendous amount of time for debugging, even if the majority of differences are false alarms.
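
In practice I’d keep that mapping as a simple, explicit list. A minimal sketch with invented entries; the debugging buffer just applies the 2n rule stated above:

```python
# Each entry maps one known difference between the QA and production
# environments to the risk it carries. The entries here are invented examples.
environment_differences = [
    {"area": "database", "difference": "QA runs an older database version than production", "risk": "medium"},
    {"area": "third-party", "difference": "payment gateway is stubbed in QA", "risk": "high"},
    {"area": "data volume", "difference": "QA has a small fraction of production rows", "risk": "medium"},
]

n = len(environment_differences)
expected_issue_noise = 2 * n  # the "Szegedi rule": n differences ~ 2n suspicious results

print(f"{n} known differences -> budget debugging time for roughly {expected_issue_noise} "
      "environment-related issues, many of which will turn out to be false alarms.")
```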

“There’s no need for a test management tool”

The problem: The stakeholders who finance the projects don’t think a real test management tool is necessary.

In the 2017–18 ISTQB report, test metrics and test effort estimation were important to only 23.5 percent of the companies surveyed—but 62.5 percent of them use some kind of test management tool! My experience matches this: aside from the blocking bugs, management wasn’t really interested in other test reports, no matter how many hours it took me to compile one. And I had to make them manually, since we were not using any professional test management tool.

The solution: Connect the purchase of such a tool to the start of a new major project, emphasizing the benefits of using one—with case studies, if possible. It will be much easier to get resources this way.

“I didn’t have time to read the test report”

The problem: During hands-on sessions or other meetings, you realize the stakeholders haven’t read through the test report at all. It’s even more annoying if you spent a serious amount of time preparing it. It’s no use blaming them; they just get too much information on a daily basis.

The solution: Assume stakeholders don’t read your reports, and prepare for meetings with notes highlighting the most important issues and metrics.
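
One lightweight way to produce those notes is to boil the report down to a few counts and the issues worth talking about first. A small sketch with invented defect records; in practice they would come from your tracker’s export or API:

```python
from collections import Counter

# Invented defect records standing in for a tracker export.
defects = [
    {"id": "BUG-101", "severity": "blocker", "summary": "Checkout fails for saved cards"},
    {"id": "BUG-102", "severity": "major", "summary": "Search ignores date filter"},
    {"id": "BUG-103", "severity": "minor", "summary": "Typo on invoice footer"},
    {"id": "BUG-104", "severity": "major", "summary": "Slow report export over 10k rows"},
]

# One line of metrics plus the short list to raise first in the meeting.
by_severity = Counter(d["severity"] for d in defects)
headline = [d for d in defects if d["severity"] in ("blocker", "major")]

print("Open defects:", dict(by_severity))
print("Talk about these first:")
for d in headline:
    print(f"  {d['id']} ({d['severity']}): {d['summary']}")
```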

“These requirements are too complex”

The problem: The requirements are too complex to check and validate. The new function should work well with the legacy functions, the cutting-edge solution should cover all bases and corner cases, and so on.

The solution: In these cases, I switch to my business analyst identity and map the new functions via good old fieldwork: drawing diagrams and gathering information from developers, other business analysts, and architects.

“Just automate all the tests”

The problem: Because of some badly written articles about the benefits of test automation, the stakeholders believe automation will solve every problem and find all bugs in the code.

The solution: Education is the only solution here. Tell the stakeholders how much work is needed to maintain the automated tests, assess the results, and develop new ones for new functions.

What are your most frequent and annoying challenges, and how do you deal with them? Let me know in the comments below!

User Comments

2 comments
Mark Bentsen

Regarding the test management tool section: let's assume you have one; the challenge I'm running into is how to transition from one to another. The one we have now is really suited for waterfall development. The other challenge is when you have two testing teams working together at some point in the project and determining what will be the system of record. There are solutions out there that allow for integrations across systems, but it's not as easy as 1-2-3. Thanks for the great article.

July 3, 2019 - 2:24pm
László Szegedi

Companies usually deploy a proof-of-concept product to demonstrate how powerful the new tool is. However, I've read some analyses arguing this method is not so effective. They suggest switching to the new tool once and for all, which reduces the vacillation and the risk of the transition never being completed.

About a common issue tracking system: I suggest a negotiation between the teams. Frankly, any system more capable than simple Excel sheets offers roughly the same features, so you can score a quiet win during these negotiations even if you let the other party win.

Thanks for reading!

July 3, 2019 - 3:51pm
