Testing Wins Should Come through Mastery, Not Luck


“Contrary to the action plan contained in the model approval, the process was never automated.”

The massive software failures we see from time to time and the known vulnerabilities of our systems raise the question: How solid is the house? Has our software been battered by the elements enough to prove that it is as solid as we think it is? In other words, are we as good at testing as we think we are, or are we just getting lucky?

Many of us have probably earned rock-star status in the first week or two on a new job by finding a truckload of bugs in the existing codebase that the project’s testers never found. There’s something to be said for fresh eyes, but if a new tester is bouncing in bugs like a toddler in a ball pit, most of those bugs will be low-hanging fruit, and the heuristics and skills used to find them nothing more than elementary. Lest we be tempted to elevate ourselves as the crowd cheers, it shouldn’t be missed that many folks have simply never seen a good tester in the flesh before, and much of our software has never been rigorously tested.

Competition makes for bigger, faster, and stronger testing. Challenges are how we test our mettle and grow in hopes of rising to the occasion. But what if there are few challengers out there in the trenches? We don’t want to deploy weak software—like that used in industrial control systems—that looks tough on the outside but breaks down the moment it is pushed. How can we become better testers when we have no mentors in the flesh or peers to test ourselves against? And who will chastise us from across the cubicles for missing an obvious bug?

One suggestion is to be proactive in building your team, whether you are in a position of leadership or not. Buy a couple of testers lunch, grab a conference room, and teach them something while they are munching away. Type up a handout with the exact steps and commands needed to do something, like searching for a specific transaction in a log. Break learning down into bite-sized chunks. Your team members will be energized by their new powers of testing, and your own skills will grow by teaching something that was formerly just in your head.
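As a concrete example, a handout exercise like the log search just mentioned can boil down to a dozen lines. Here is a minimal sketch in Python; the log file name and transaction ID format are hypothetical, stand-ins for whatever your system actually uses:

    # Hypothetical exercise: print every log line that mentions a given
    # transaction ID, along with its line number.
    LOG_PATH = "app.log"   # assumed log file name
    TXN_ID = "TXN-0042"    # assumed transaction ID format

    with open(LOG_PATH, encoding="utf-8") as log:
        for lineno, line in enumerate(log, start=1):
            if TXN_ID in line:
                print(f"{lineno}: {line.rstrip()}")

A chunk this size is small enough to absorb over lunch and concrete enough to use the same afternoon.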

Another suggestion is to actively talk about your software to people outside your immediate test team. Hopefully, you are talking regularly to the developers, but if not, now’s the time to start. Talk to developers, database administrators, security pros, performance experts, and the server team about your project. Each of these people plays a specific position on the team. They may not be able to tell you what the guy playing first should be doing, but they can tell you everything about what the shortstop should be doing because that’s the position he plays. Talking to a variety of IT pros rather than limiting your conversation to your circle of testers or even developers will help you figure out how to truly exercise your software and make it stronger before it gets deployed.

Start a blog and write about what you’re testing, minus any details you can’t give out. Pose questions on Twitter. Get a conversation going about some real testing effort you’re engaged in. Don’t be afraid to look like a non-expert—someone out there probably is a better tester than you because they’ve been in the game longer or encountered more. You can leverage this person’s experience to exercise your skills.

As testers, some of our track record will be pure luck—for better or for worse. But our wins should come through mastery, not luck.

User Comments

8 comments
Pete Dean

Excellent article!

"...actively talk about your software to people outside your immediate test team. ...Talk to developers, database administrators, security pros, performance experts, and the server team about your project. Each of these people plays a specific position on the team. They may not be able to tell you what the guy playing first should be doing, but they can tell you everything about what the short stop should be doing because that’s the position he plays."

Absolutely 100% spot on. I've been doing this for years now. It's such a ridiculously simple suggestion, but it has paid off for me so many times and saved my career more often than I care to admit (in public).

I look forward to reading future articles from you. Please write more!

Regards

Peter

August 27, 2013 - 9:49am
Bonnie Bailey

Thanks Peter! I appreciate the encouragement! :)

August 28, 2013 - 12:00pm
Mukesh Sharma

Bonnie – You bring an important and interesting discussion to the table. Testing as a discipline offers endless learning opportunities to the tester – be it about the product, technology, testing techniques, end-user needs, or how to collaborate with the team in helping them understand the quality goals. Along with the suggestions you have listed to help the tester grow, here’s one from a slightly different angle that I have found beneficial for promoting good team collaboration and pushing quality upstream: the tester helps the team understand the product’s quality goals and empowers them to enhance quality through their own efforts in whatever ways are possible. For example, working with the development team on building a set of unit tests, or giving the build team a set of automated smoke tests that can be run to verify a new build. Through these steps a proactive tester is able to build better team collaboration, improve product quality, and, more importantly, create more time to work on the core testing tasks that will build his or her mastery as well as product quality.
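A build-verification smoke test of the kind described above can start very small. Here is a minimal sketch, assuming a web application reachable over HTTP and using only the Python standard library; the base URL and page paths are hypothetical:

    import sys
    import urllib.request

    # Hypothetical smoke test: after a new build is deployed, confirm the
    # key pages answer at all before deeper testing begins.
    BASE_URL = "http://localhost:8080"         # assumed deployment address
    CHECKS = ["/login", "/search", "/health"]  # assumed key pages

    failures = []
    for path in CHECKS:
        try:
            with urllib.request.urlopen(BASE_URL + path, timeout=10) as resp:
                if resp.status != 200:
                    failures.append(f"{path}: HTTP {resp.status}")
        except OSError as err:  # covers connection errors and HTTP errors
            failures.append(f"{path}: {err}")

    if failures:
        print("Smoke test FAILED:", *failures, sep="\n  ")
        sys.exit(1)
    print("Smoke test passed: the build is worth testing further.")

Handing the build team a script like this lets them reject a broken build in seconds, before any tester's time is spent on it.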

August 30, 2013 - 12:42am
Kimberly Rabbeni

When I saw the title of this article, I thought the author was completely out of their mind. In all my years, none of my "Testing Wins" were the result of luck. I was quite insulted on behalf of all Test Engineers/Quality Assurance Engineers in the business.

When I graduated from college (1984), there were no accepted practices for Software Development, let alone Testing. We were making things up as we went along. Over time our processes became more standardized, and our hard work was rewarded by very successful customer acceptance testing. Our customer was the Navy.

Carnegie Mellon had been tasked with developing a method for evaluating contractors competing for DOD contracts. My group was asked to help develop some of the process requirements contractors would be evaluated against. One of the major requirements was training programs for all aspects of a project. I was involved in evaluating a Unit Test training course (both the original and a revamped course based on our feedback). I was later asked to help develop a System Test training course and to co-teach the class. Eventually, my co-teachers dropped out due to schedule conflicts with their projects, and I ended up running the class by myself for more than a year. My management allowed me to take the time off our project because it was good for my professional development. Eventually, the class was turned over to someone else.

Our process included design and code inspections. For the design inspections, we reviewed the English-language version of the unit along with all documentation and test cases for the unit testing. For code inspections, we reviewed the code (either higher-level language or machine language) and more detailed test procedures. Whenever a problem was found after the code had been implemented (usually in system test or after deployment), a problem report was initiated. Each problem was reviewed, and a determination was made as to whether there was a workaround or whether it required patching the software. Patching was done in machine language and went through the same inspection process as the original design/code.

It was required that a test engineer be present for all inspections.  The inspection could not be held under any circumstances if there was not a test engineer available.  

The last major upgrade I worked on was when we transitioned from a company-proprietary processor to an Intel processor. This required us to retest all the functionality to ensure that nothing was broken. I was the Software Test Manager for the project. In addition to managing a very small group of testers, I wrote test cases/procedures, reviewed test cases/procedures written by my team, and ran tests at the Software Test level (versus System Test, which ran on the actual hardware - although I did that as well).

The code was written in FORTRAN (yes, I am dating myself), and there was only one vendor with a FORTRAN compiler. I spent almost as much time debugging the compiler as I did testing our code.

I know that anyone reading this is thinking that the government can (and does) spend a lot of money to ensure that there are no bugs, and that it can afford to throw money away on silly things like designing test cases. Just before we started the conversion to an Intel processor, a new Software Manager started with my group. We already had a budget and schedule. We were very sure about our schedule and knew that the total budget was accurate. The only thing we weren't sure about was exactly how to divide up the budget.

We very firmly believed that the money we spent on inspections was going to save us money in the testing phase. The new Software Manager kept having fits because he said we were spending too much money on the inspections and were going to be over budget. He insisted that we suspend the inspections. He was overruled by just about everyone. He also decided that he was going to improve the metrics by closing all the open problem reports. I told him that he did not have the authority to close them. He told me he was the software manager and he could close them. He very quickly found out that he could only recommend them for closure. Closing a problem report required a recommendation for closure at each step in the process. Since they hadn't been tested (because no solution was implemented), I did not recommend them for closure, and they stayed open.

The end result was that we finished the project 30 days early and $400,000 under budget. The earlier in the process you find a problem, the less expensive it is to fix. Because we inspected the test cases along with the design of each unit, design flaws were caught before the design was coded. Because unit testing was comprehensive, there was no "low-hanging fruit" for the Software Test group to find (unit testing was done by the developer). Problems found in Software Test were limited to the interfaces between units, and there were very few because there were system documents which laid out the interfaces. When we got to System Test/Integration, there were fewer than a handful of issues, and they were problems that could only be found when running on the hardware.

Even though writing test cases may not have been the developers' favorite task, they wrote better code because thinking about how to test their software identified design flaws or code issues. The developers also took great pride in the fact that the Software Test group was unable to find any problems with their product. The Software Test group took great pride in the fact that when they passed the software to the System Test group, very few problems were found. I remember one of the System Engineers telling me that we did such a good job of testing that he was bored because all his test cases worked.

Your product dictates how much can be spent on testing. You can't spend the same amount of money to test an app as you would on, say, tax software or software going into the space shuttle.

But certainly in the example you gave of JP Morgan, that was a very expensive bug for something that was relatively easy to test. The probe that was sent to Mars (I think) and never heard from again as the result of one unit using metric and another using non-metric units was a very costly mistake.

One year when I was doing our taxes, I found that TurboTax decided that I had paid too much in SocSec taxes. My husband and I worked at the same company, and TurboTax added our SocSec taxes together, compared the total to the max for a single person, and told me I had paid too much. If I had not caught it, it would have been very costly to me. It wouldn't have cost TurboTax anything, but it was a bug that should have been found before the software was sold.
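Reading the anecdote above, the defect appears to be a per-person cap applied to a per-couple total. Here is a minimal sketch of that logic error in Python; the cap figure is a placeholder, not the real limit for any tax year:

    # Hypothetical reconstruction of the defect described above: Social
    # Security withholding is capped per person, not per joint return.
    SS_MAX_PER_PERSON = 8000.00  # placeholder cap; the real figure varies by year

    def excess_ss(withheld):
        """Correct: compare each person's withholding to the cap."""
        return sum(max(0.0, w - SS_MAX_PER_PERSON) for w in withheld)

    def excess_ss_buggy(withheld):
        """The reported bug: total both spouses, then apply one cap."""
        return max(0.0, sum(withheld) - SS_MAX_PER_PERSON)

    couple = [6000.00, 5500.00]     # neither spouse is over the cap
    print(excess_ss(couple))        # 0.0    -> no refund due
    print(excess_ss_buggy(couple))  # 3500.0 -> phantom "overpayment"

A test case as simple as "married couple, same employer, both under the cap" would have caught it.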


Testing is everyone's responsibility.

Sorry to go on and on, but this hit a nerve. Boy, do I miss testing. I found it a lot of fun.


February 5, 2014 - 12:42am
Madhava Verma Dantuluri

Wonderful article. Testing industrial systems can take a heavy toll on the QA team, because the risk factors involve real human factors.

February 13, 2014 - 10:09pm
Hardik Gandhi

Excellent article...very informative for a wannabe tester like me.

February 21, 2014 - 1:39pm
Munish Bhalla

Thanks for sharing. This was a very informative article that clearly highlighted the role testing plays in a project.

Following is a link to examples of IT and non-IT project failures:

http://calleam.com/WTPF/?page_id=3


October 23, 2014 - 8:27pm
Ali Khalid

I agree with the picture depicted of the masses of testers.

November 19, 2014 - 10:37am

About the author

Bonnie Bailey

Bonnie Bailey is a software test engineer for a health care information technology company. Bonnie is an avid reader of fiction and non-fiction, including software design, testing and development, disruptive and emerging technologies, business leadership, science, and medicine. She also enjoys writing. Find Bonnie online at bbwriting.com.
