The Bugs That Deceived Me

[article]
Summary:

Every time we look at the data, we perform an analysis that helps us make decisions—hopefully the right ones. In this article, Gil Zilberfeld describes a few traps where bug data misled him into making bad decisions. These traps are in the data itself, not the tools, and can lead us in the wrong direction.

When I started my software development career, I was introduced to the big QA database. “The bug store” was where the testers stored all the bugs they found, as well as those found by customers. I never thought there was another way to work until I moved to Typemock.

As a startup, we could choose whatever tools we wanted, and in the beginning we used a wiki. Later on, as the product grew in features, and thankfully in customers, we started looking for other tools. When I became a product manager, I decided the best way to manage bugs was an Excel file.

As much as I’d like to dismiss the big bad bug database (it’s not an “agile” tool), I can see a lot of resemblance between the two. It’s not about how the tool manages the information, it’s how we perceive it. Every time we look at the data, we perform an analysis that helps us make decisions, hopefully the right ones.

It is possible to make wrong decisions. Along the way, I’ve picked up a few traps where bug data misled me into making bad decisions. These traps are in the data itself, not the tools, and can lead us in the wrong direction.

The Age of Bugs
When we start testing a new project, all bugs are comparable: our analysis applies to all of them at the same moment in time. We can differentiate between high-priority and low-priority bugs and decide to fix the former, because at the time of our analysis, the former looked like must-fixes.

Of the products I have tested, the worst versions always seem to be the initial versions. I’ve had a few of those projects, and the bug databases quickly filled with high-priority bugs; we didn’t get to fix all of them, either. We “managed scope,” cut some corners, and released a product. We also didn’t have the nerve to remove the high-priority (and sometimes the low-priority) bugs from our database. A year later, we still had high-priority bugs in our system.

Yes, the database was cluttered. The more open bugs we had, the longer that triage and re-analysis (or “grooming” in agile-speak) took. The real problem was that the list of high-priority bugs contained both old and new ones. The truth is, of course, that the old ones were not really high priority, but we still compared them as if they were.

The logical way to deal with these bugs, and the one I’ve adopted through the years, is to go back and re-prioritize. Recently, though, I’m more inclined to delete any bugs that I don’t see us handling in the very near future. Some don’t even go into the database, because the team closes the loop quickly and decides together that these bugs can wait. It’s still a struggle, both internally and within the team (“We need to keep this, so we don’t forget how important it is”).

Big databases seem bigger every time you go back to them. Make them as small as possible by removing the less important stuff. It may take some cold decisions, but it will focus the team on what’s important.
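The idea of re-triaging by age can be sketched as a simple filter. This is a hypothetical example, not from the article: the record fields (`id`, `priority`, `opened`) and the 180-day cutoff are illustrative assumptions, standing in for whatever your tracker exports.

```python
from datetime import date, timedelta

# Hypothetical bug records; field names are illustrative, not from any real tracker.
bugs = [
    {"id": 101, "priority": "high", "opened": date(2013, 1, 15)},
    {"id": 102, "priority": "high", "opened": date(2013, 12, 20)},
    {"id": 103, "priority": "low",  "opened": date(2012, 6, 1)},
]

def stale_candidates(bugs, today, max_age_days=180):
    """Return high-priority bugs older than max_age_days.

    These are candidates for re-prioritization or deletion: if a
    "must-fix" bug has waited this long, it probably wasn't one.
    """
    cutoff = today - timedelta(days=max_age_days)
    return [b for b in bugs if b["priority"] == "high" and b["opened"] < cutoff]

print([b["id"] for b in stale_candidates(bugs, date(2014, 1, 6))])  # -> [101]
```

The cutoff is a conversation starter, not a rule: the point is to make the old “high-priority” bugs visible so the team can re-prioritize or delete them deliberately.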

Bug data doesn’t just skew our decision-making about what to handle next. It can also point us away from where we can really improve our process quality.

The Bug and Code Disconnect
I’ve managed bugs in different ways over the years. In all projects, they were never connected directly to the source code. This disconnect makes it hard to spot problems in specific parts of the code. The closest I got was the component level; I knew which components were more bug-ridden than others. However, the code base was large, and that information was not helpful in pinpointing problems. It was never a quantitative measure, as bugs were usually tagged as belonging to components during analysis, but the real code changes were not logged. We could not rely on the tagging as a problem locator.

Some application lifecycle management (ALM) tools do make the connection: Once you have a work item for the bug, the code changes for the bug fix are kept under it. Yet I found that extracting information from these tools is still hard, and the information you get is partial.

Finding errors in the process around coding can save us loads of problems. We can avoid more bugs by diverting attention to the problem areas in coding, reviewing, and testing. I haven’t found a good tool for that yet, so I guess the solution is in the process; whatever tool you use, try to keep the bugs and related code connected and tagged correctly. If you can do that, you can do some very interesting analysis.
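One low-tech way to keep bugs and code connected is a commit-message convention, then mining the log for hotspots. The sketch below is an assumption-laden illustration, not the author’s method: the “fixes #NNN” convention, the commit tuples, and the file names are all hypothetical, standing in for output you would parse from your version control system (e.g., `git log --name-only`).

```python
import re
from collections import Counter

# Hypothetical commit log entries: (message, touched files). In practice these
# would come from your VCS, e.g. by parsing `git log --name-only`.
commits = [
    ("Fix crash on empty input, fixes #12", ["parser/tokenizer.c"]),
    ("Handle unicode paths, fixes #17", ["parser/tokenizer.c", "io/paths.c"]),
    ("Add CLI flag", ["cli/main.c"]),
]

# Matches the assumed "fixes #NNN" bug-reference convention.
BUG_REF = re.compile(r"fixes #(\d+)", re.IGNORECASE)

def bug_hotspots(commits):
    """Count how many bug-fix commits touched each file.

    A crude proxy for "problem areas in the code": files that keep
    showing up in bug-fix commits deserve extra review and testing.
    """
    counts = Counter()
    for message, files in commits:
        if BUG_REF.search(message):
            counts.update(files)
    return counts

print(bug_hotspots(commits).most_common(1))  # -> [('parser/tokenizer.c', 2)]
```

If the team tags fixes consistently, this kind of analysis falls out of the commit history for free, without waiting for an ALM tool to do it.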

But that’s not all the data that gets lost.

The Lost Data
Here’s a shocker: all the bugs in our database were found during testing.

We officially call them bugs after we find them. But there are others that appear along the way and never get to that special occasion. These are the bugs the developer caught while coding, or the ones caught by the suite of automated tests.

“That’s what test suites are for, genius!”

User Comments

1 comment
Rob Black

Are you also doing a root cause analysis and tracking the results of it? For example, is the defect caused by a missed unit test, changing requirements, a missing requirement, a misunderstood requirement, an unimplemented feature, an architectural breakdown, etc.? Do you also track defects in documentation that take away from time spent on the core deliverable application? I'm a firm believer that rework costs money. And rework can be in many different software lifecycle artifacts. For some applications, or projects, the supporting artifacts are of great value. For instance, if a team is to deliver an SDK, the supporting documentation is very valuable in reducing calls for support. In other efforts, the supporting artifacts may aid in certification or regulatory approval, whether from internal audits, external government or customer audits, etc. Quality is measured by the customer's perceived value of a product. Yet the cost of quality is measured internally.

January 6, 2014 - 7:27pm

About the author

Gil Zilberfeld

Gil Zilberfeld has been in software since childhood, writing BASIC programs on his trusty Sinclair ZX81. With more than twenty years of developing commercial software, he has vast experience in software methodology and practices.

Gil is an agile consultant who has applied agile principles over the last decade. From automated testing to exploratory testing, design practices to team collaboration, scrum to kanban, and lean startup methods, he’s done it all. He is still learning from his successes and failures.

Gil speaks frequently at international conferences about unit testing, TDD, agile practices, and communication. He is the author of "Everyday Unit Testing," blogs at http://www.gilzilberfeld.com, and in his spare time he shoots zombies, for fun.
