Testing Wins Should Come through Mastery, Not Luck

[article]
Summary:
Bonnie Bailey writes that as testers, some of our track record will be pure luck—for better or for worse. We should, however, strive to test well enough that users must be crafty to cripple the software we stamp.

How often are software testers getting lucky?

By that, of course, I mean how often is our software simply not getting used and abused enough to show whether or not we’ve tested it well?

Take industrial control systems testing, a subject on the radar due to security incidents like the Stuxnet worm. While the Stuxnet attack was sophisticated, researchers and security hobbyists, like the man who pinged the whole Internet, have demonstrated that significantly less-complex vulnerabilities abound. That these vulnerabilities have not been exploited to greater detriment of the public speaks not to real security or to the quality of the testing done, but rather to the disinterest and perhaps the patience of potential attackers.

In December 2012, Chinese hackers infiltrating a decoy water control system for a US municipality were caught by a research project aimed at exposing hacking groups that intentionally seek out and compromise water plant systems. According to security researchers, none of the attacks exposed by the project displayed a high level of sophistication. Researcher Kyle Wilhoit concluded, “These attacks are happening and the engineers likely don’t know.”

Vulnerabilities like these could permit attacks resulting in power outages, environmental damage, and loss of life, yet hacking industrial systems remains quite easy for attackers versed in their workings. All three presentations on control system vulnerabilities at the Black Hat computer security conference earlier this month described attacks requiring significantly fewer resources and less skill than the Stuxnet operation.

But security is a tough business, multifaceted and fed by a small pool of technical talent. Add to that the low incentive for energy operators and manufacturers of control systems to beef up security, and it’s not that surprising that massive vulnerabilities exist.

Even where there’s huge liability and money on the line, however, projects still fail in the wild because of non-security bugs that testing should have caught.

J.P. Morgan, one of the largest banking and financial services firms in the world, lost its rear end over an Excel-based value at risk (VaR) modeling tool designed to help the chief investment officer (CIO) understand the level of risk the company was exposed to, informing decisions about what trades to make and when. After a poorly positioned trade resulted in losses totaling billions of dollars between April and June 2012, an investigative committee found that the tool that gave such horrendously bad advice had been poorly designed, developed, and tested.

A J.P. Morgan report detailed how it happened: “Specifically, after subtracting the old rate from the new rate, the spreadsheet divided by their sum instead of their average, as the modeler had intended. This error likely had the effect of muting volatility by a factor of two and of lowering the VaR…It also remains unclear when this error was introduced into the calculation.”
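
To make the arithmetic concrete, here is a minimal sketch of that mistake. The rates and names below are invented for illustration (the real model lived in an Excel spreadsheet, not Python); the point is simply that dividing a difference by the sum of two quantities instead of their average halves the result:

```python
# Illustrative only: made-up rates, not J.P. Morgan's actual data or formulas.
old_rate, new_rate = 0.030, 0.036

# What the modeler intended: normalize the rate change by the average of the rates.
intended = (new_rate - old_rate) / ((old_rate + new_rate) / 2)

# What the spreadsheet actually did: divide by the sum of the rates.
buggy = (new_rate - old_rate) / (old_rate + new_rate)

print(intended)          # ~0.1818
print(buggy)             # ~0.0909
print(intended / buggy)  # 2.0 -- the error mutes measured volatility by a factor of two
```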

Even a rookie tester with half her brain tied behind her back should have found such a grossly incorrect formula that caused the firm’s VaR to be misreported by a factor of 50 percent. But the investigative committee found that “inadequate resources were dedicated to the development of the model…the model review group required only limited back-testing of the new model, and it insufficiently analyzed the results that were submitted.”

The committee’s report indicated a complete lack of rigor in testing by all who touched the modeler. “Data were uploaded manually without sufficient quality control,” the report claims. “Spreadsheet-based calculations were conducted with insufficient controls and frequent formula and code changes were made. Inadequate information technology resources were devoted to the process. Contrary to the action plan contained in the model approval, the process was never automated.”

The massive software failures we see from time to time and the known vulnerabilities of our systems raise the question: How solid is the house? Has our software been battered by the elements enough to prove that it is as solid as we think it is? In other words, are we as good at testing as we think we are, or are we just getting lucky?

Many of us have probably earned rock-star status in the first week or two on a new job by finding a truckload of bugs in the existing codebase that the project’s testers never found. There’s something to be said for fresh eyes, but if a new tester is bouncing in bugs like a toddler in a ball pit, most of those will be low-hanging fruit, and the heuristics and skills used to find them nothing more than elementary. Lest we be tempted to elevate ourselves as the crowd cheers, it shouldn’t be missed that many folks have simply never seen a good tester in the flesh before, and much of our software has never been rigorously tested.

Competition makes for bigger, faster, and stronger testing. Challenges are how we test our mettle and grow, in hopes of rising to the occasion. But what if there are few challengers out there in the trenches? We don’t want to deploy weak software, like that used in industrial control systems, that looks tough on the outside but breaks down the moment it is pushed. How can we become better testers when we have no mentors in the flesh or peers to test ourselves against? Besides, who will chastise us from across the cubicles for missing an obvious bug?

One suggestion is to be proactive in building your team, whether you are in a position of leadership or not. Buy a couple of testers lunch, grab a conference room, and teach them something while they are munching away. Type up a handout with the exact steps and commands needed to do something, like searching for a specific transaction in a log (see the sketch below). Break learning down into bite-sized chunks. Your team members will be energized by their new powers of testing, and your skills will grow by teaching something that was formerly just in your head.
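
As an illustration of how small such a handout step can be, here is a hypothetical log-search exercise sketched in Python; the file name and transaction ID are placeholders, not from any real system:

```python
# Hypothetical handout exercise: list every log line that mentions a transaction ID.
# "app.log" and "TXN-12345" are placeholders -- swap in your own file and ID.
with open("app.log") as log:
    for line_number, line in enumerate(log, start=1):
        if "TXN-12345" in line:
            print(f"{line_number}: {line.rstrip()}")
```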

Another suggestion is to actively talk about your software to people outside your immediate test team. Hopefully, you are talking regularly to the developers, but if not, now’s the time to start. Talk to developers, database administrators, security pros, performance experts, and the server team about your project. Each of these people plays a specific position on the team. They may not be able to tell you what the guy playing first should be doing, but they can tell you everything about what the shortstop should be doing, because that’s the position he plays. Talking to a variety of IT pros rather than limiting your conversation to your circle of testers or even developers will help you figure out how to truly exercise your software and make it stronger before it gets deployed.

Start a blog and write about what you’re testing, minus any details you can’t give out. Pose questions on Twitter. Get a conversation going about some real testing effort you’re engaged in. Don’t be afraid to look like a non-expert—someone out there probably is a better tester than you because they’ve been in the game longer or encountered more. You can leverage this person’s experience to exercise your skills.

As testers, some of our track record will be pure luck—for better or for worse. We should, however, strive to test well enough that users must be crafty to cripple the software we stamp. Given the dearth of testers who could serve as mentors in the average organization, we can nonetheless, with conscientious effort, work to grow the herd for ourselves, our craft, and the future of testing.

User Comments

8 comments
Pete Dean

Excellent article!

"...actively talk about your software to people outside your immediate test team. ...Talk to developers, database administrators, security pros, performance experts, and the server team about your project. Each of these people plays a specific position on the team. They may not be able to tell you what the guy playing first should be doing, but they can tell you everything about what the short stop should be doing because that’s the position he plays."

Absolutely 100% spot on. I've been doing this for years now. It's such a ridiculously simple suggestion, but it has paid off for me so many times and saved my career more often than I care to admit (in public).

Look forward to reading future articles from you, please write more!

Regards

Peter

August 27, 2013 - 9:49am
Bonnie Bailey

Thanks Peter! I appreciate the encouragement! :)

August 28, 2013 - 12:00pm
Mukesh Sharma

Bonnie – You bring an important and interesting discussion to the table. Testing as a discipline offers endless learning opportunities to the tester – be it about the product, technology, testing techniques, end user needs, or how to collaborate with the team in helping them understand the quality goals. Along with the suggestions you have listed to help the tester grow, here’s one from a slightly different angle that I have found beneficial, one that promotes good team collaboration and pushes quality upstream: the tester helps the team understand the product’s quality goals and empowers them to enhance quality through their own efforts wherever possible. For example, working with the development team to build a set of unit tests, or giving the build team a set of automated smoke tests that can be run to verify a new build. Through these steps a proactive tester is able to build better team collaboration, improve product quality, and, more importantly, create more time for him/her to work on core testing tasks that will build his/her mastery as well as product quality.

August 30, 2013 - 12:42am
Kimberly Rabbeni

When I saw the title of this article I thought the author was completely out of their mind. In all my years, none of my "Testing Wins" were the result of luck. I was quite insulted on behalf of all Test Engineers/Quality Assurance Engineers in the business.

When I graduated from college (1984), there were no accepted practices for Software Development let alone Testing.  We were making things up as we went along. Over time our processes became more standardized and our hard work was rewarded by very successful customer acceptance testing.  Our customer was the Navy.

Carnegie Mellon had been tasked with developing a method for evaluating contractors competing for DOD contracts. My group was asked to help develop some of the process requirements contractors would be evaluated against. One of the major requirements was training programs for all aspects of a project. I was involved in evaluating a Unit Test training course (both the original and a revamped course based on our feedback). I was later asked to help develop a System Test training course and to co-teach the class. Eventually, my co-teachers dropped out due to schedule conflicts with their projects, and I ended up running the class by myself for more than a year. My management allowed me to take the time off our project because it was good for my professional development. Eventually, the class was turned over to someone else.

Our process included design and code inspections. For the design inspections, we reviewed the English-language version of the unit along with all documentation and test cases for the unit testing. For code inspections, we reviewed the code (either higher-level language or machine language) and more detailed test procedures. Whenever a problem was found after the code had been implemented (usually in system test or after deployment), a problem report was initiated. Each problem was reviewed and a determination was made as to whether there was a workaround or whether it required patching the software. Patching was done in machine language and went through the same inspection process as the original design/code.

It was required that a test engineer be present for all inspections.  The inspection could not be held under any circumstances if there was not a test engineer available.  

The last major upgrade I worked on was when we transitioned from a company-proprietary processor to an Intel processor. This required us to retest all the functionality to ensure that nothing was broken. I was the Software Test Manager for the project. In addition to managing a very small group of testers, I wrote test cases/procedures, reviewed test cases/procedures written by my team, and ran tests at the Software Test level (versus System Test, which ran on the actual hardware, although I did that as well).

The code was written in FORTRAN (yes, I am dating myself) and there was only one vendor with a FORTRAN compiler. I spent almost as much time debugging the compiler as I did testing our code.

I know that anyone reading this is thinking that the government can (and does) spend a lot of money to ensure that there are no bugs and can afford to throw money away on silly things like designing test cases. Just before we started the conversion to an Intel processor, a new Software Manager started with my group. We already had a budget and schedule. We were very sure about our schedule and knew that the total budget was accurate. The only thing we weren't sure about was exactly how to divide up the budget.

We very firmly believed that the money we spent on inspections was going to save us money in the testing phase. The new Software Manager kept having fits because he said we were spending too much money on the inspections and were going to be over budget. He insisted that we suspend the inspections. He was overruled by just about everyone. He also decided that he was going to improve the metrics by closing all the open problem reports. I told him that he did not have the authority to close them. He told me he was the software manager and he could close them. He very quickly found out that he could only recommend them for closure. Closing a problem report required a recommendation for closure at each step in the process. Since they hadn't been tested (because no solution was implemented), I did not recommend them for closure and they stayed open.

The end result was that we finished the project 30 days early and $400,000 under budget. The earlier in the process you find a problem, the less expensive it is to fix. Because we inspected the test cases along with the design of each unit, design flaws were caught before the design was coded. Because unit testing was comprehensive, there was no "low-hanging fruit" for the Software Test group to find (unit testing was done by the developer). Problems found in Software Test were limited to the interfaces between units, and there were very few because there were system documents which laid out the interfaces. When we got to System Test/Integration there were fewer than a handful of issues, and they were problems that could only be found when running on the hardware.

Even though writing test cases may not have been the developers' favorite task, they wrote better code because thinking about how to test their software identified design flaws or code issues. The developers also took great pride in the fact that the Software Test group was unable to find any problems with their product. The Software Test group took great pride in the fact that when they passed the software to the System Test Group, very few problems were found. I remember one of the System Engineers telling me that we did such a good job of testing that he was bored because all his test cases worked.

Your product dictates how much can be spent on testing. You can't spend the same amount of money to test an app as you would on, say, tax software or on software going into the space shuttle.

But certainly in the example you gave of JP Morgan, that was a very expensive bug for something that was relatively easy to test. The probe that was sent to Mars (I think) and never heard from again as the result of one unit using metric and another using non-metric was a very costly mistake.

One year when I was doing our taxes, I found that TurboTax decided that I had paid too much in SocSec taxes. My husband and I worked at the same company, and TurboTax added our SocSec taxes together, compared the total to the max for a single person, and told me I paid too much. If I had not caught it, it would have been very costly to me. It wouldn't have cost TurboTax anything, but it was a bug that should have been found before the software was sold.


Testing is everyone's responsibility.

Sorry to go on and on, but this hit a nerve. Boy do I miss testing. I find it a lot of fun.


February 5, 2014 - 12:42am
Madhava Verma Dantuluri

Wonderful article. Testing industrial systems can take a heavy toll on a QA team, because the risk factors ultimately come down to natural and human factors.

February 13, 2014 - 10:09pm
Hardik Gandhi

excellent article...very informative for a wannabe tester like me...

February 21, 2014 - 1:39pm
Munish Bhalla

Thanks for sharing. This was a very informative article and it clearly highlighted the role testing plays in a project.

Following is a link to examples of IT and non-IT project failures:

http://calleam.com/WTPF/?page_id=3


October 23, 2014 - 8:27pm
Ali Khalid

I agree with the picture depicted of the masses of testers.

November 19, 2014 - 10:37am
