The Bugs That Deceived Me

Summary:

Every time we look at the data, we perform an analysis that helps us make decisions—hopefully the right ones. In this article, Gil Zilberfeld describes a few traps where bug data misled him to make bad decisions. These traps are in the data itself, not the tools, and can lead us in the wrong direction.

When I started my software development career, I was introduced to the big QA database. “The bug store” was where the testers stored all the bugs they found, as well as those found by customers. I never thought there was another way to work until I moved to Typemock.

As a startup, we could choose whatever tools we wanted, and in the beginning, we used a wiki. Later on, as the product grew in features (and, thankfully, in customers), we started looking for other tools. When I became a product manager, I decided the best way to manage bugs was with an Excel file.

As much as I’d like to dismiss the big bad bug database (it’s not an “agile” tool), I can see a lot of resemblance between the two. It’s not about how the tool manages the information; it’s how we perceive it. Every time we look at the data, we perform an analysis that helps us make decisions, hopefully the right ones.

It is possible to make wrong decisions. Along the way, I’ve run into a few traps where bug data misled me into making bad decisions. These traps are in the data itself, not the tools, and they can lead us in the wrong direction.

The Age of Bugs
When we start testing a new project, all bugs are comparable: We can apply our analysis to all of them at the same moment in time. We can differentiate between high-priority and low-priority bugs and decide to fix the former because, at the time of our analysis, they looked like must-fixes.

Of the products I have tested, the worst versions always seem to be the initial ones. I’ve worked on a few of those projects, and the bug databases quickly filled with high-priority bugs; we didn’t get to fix all of them, either. We “managed scope,” cut some corners, and released a product. We also didn’t have the nerve to remove the high-priority (and sometimes the low-priority) bugs from our database. A year later, we still had those high-priority bugs in our system.

Yes, the database was cluttered. The more open bugs we had, the longer triage and re-analysis (or “grooming” in agile-speak) took. The real problem was that the list of high-priority bugs contained both old and new ones. The truth, of course, is that the old ones were not really high priority anymore, but we still compared them as if they were.

The logical way to deal with these bugs, and the one I’ve adopted through the years, is to go back and re-prioritize. Recently, though, I’m more inclined to delete any bug I don’t see us handling in the very near future. Some don’t even go into the database, because the team closes the loop quickly and decides together that these bugs can wait. It’s still a struggle, both internally and within the team (“We need to keep this, so we don’t forget how important it is”).

Big databases seem bigger every time you go back to them. Make them as small as possible by removing the less important stuff. It may take some cold decisions, but it will focus the team on what’s important.
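
If your tracker can export to a spreadsheet or CSV (as our Excel file effectively was), flagging stale high-priority bugs for re-triage or deletion takes only a few lines. Here’s a minimal sketch, assuming a hypothetical bugs.csv export with id, priority, and opened (ISO date) columns; the 90-day cutoff is an arbitrary placeholder.

```python
# Sketch: flag high-priority bugs older than a cutoff for re-triage or deletion.
# Assumes a hypothetical CSV export of the bug tracker with columns:
# id, priority, opened (ISO date, e.g., 2013-06-15).
import csv
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)  # arbitrary cutoff; pick what fits your cadence

def stale_high_priority(path, today=None):
    today = today or date.today()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            opened = date.fromisoformat(row["opened"])
            if row["priority"].lower() == "high" and today - opened > STALE_AFTER:
                yield row["id"], opened

for bug_id, opened in stale_high_priority("bugs.csv"):
    print(f"{bug_id}: opened {opened}, still high priority -- re-triage or delete?")
```

Running something like this before every triage session keeps the “old versus new high priority” comparison honest: anything the script flags gets re-prioritized or removed, not carried along by inertia.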

Bug data doesn’t just put our decisions about what to handle next at risk. It can also point us away from the places where we can really improve quality.

The Bug and Code Disconnect
I’ve managed bugs in different ways over the years, but in all projects, they were never connected directly to the source code. This disconnect makes it hard to spot problems in specific parts of the code. The closest I got was the component level: I knew which components were more bug-ridden than others. However, the code base was large, and this information was not helpful in pinpointing problems. It was never a quantitative measure, either; bugs were usually tagged as belonging to components during analysis, but the actual code changes were not logged, so we could not rely on the tagging as a problem locator.

Some application lifecycle management (ALM) tools do make the connection: Once you have a work item for the bug, the code changes for the bug fix are kept under it. Yet I’ve found that extracting information from these tools is still hard, and the information you get is only partial.

Finding errors in the process around coding can save us loads of trouble. We can avoid more bugs by diverting attention to the problem areas in coding, reviewing, and testing. I haven’t found a good tool for that yet, so I guess the solution is in the process: Whatever tool you use, try to keep the bugs and related code connected and tagged correctly. If you can do that, you can do some very interesting analysis, as the sketch below shows.
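
One low-tech way to approximate that connection is a commit-message convention. Here’s a minimal sketch, assuming (hypothetically) that the team puts a ticket ID such as BUG-123 in every bug-fix commit message; it counts how often each file shows up in those commits, giving a rough map of the bug-prone areas of the code.

```python
# Sketch: approximate the bug-to-code connection from version control history.
# Assumes a (hypothetical) team convention of writing a ticket ID like
# "BUG-123" in every bug-fix commit message.
import subprocess
from collections import Counter

def bugfix_file_counts(repo="."):
    # --grep keeps only commits whose message mentions a bug ID;
    # --name-only lists the files each such commit touched.
    log = subprocess.run(
        ["git", "log", "--grep=BUG-", "--name-only", "--pretty=format:@@commit@@"],
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter()
    for line in log.splitlines():
        if line and line != "@@commit@@":  # skip commit markers and blank lines
            counts[line] += 1
    return counts

# The ten files most often touched by bug-fix commits.
for path, n in bugfix_file_counts().most_common(10):
    print(f"{n:4d}  {path}")
```

It’s a heuristic, not a substitute for a real ALM link, but it costs nothing beyond discipline in commit messages.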

But that’s not all the data that gets lost.

The Lost Data
Here’s a shocker: all the bugs in our database were found during testing.

We officially call them bugs after we find them. But there are others that appear along the way and never make it to that special occasion: the bugs the developer caught while coding, or the ones caught by the suite of automated tests.

“That’s what test suites are for, genius!”

Yes, they are. And still, these unmentioned bugs could contribute to the same analysis. They have been a blind spot for me: As I test the whole application, I don’t see them, and I’m definitely not aware of what happened before the code entered source control.

Because this data is lost, we’re left with the bugs in the database.

To tell the truth, I’ve deliberately let this one go. Collecting all this information requires more attention and more data-collection work.

Instead, we discuss the big picture in a qualitative manner. Luckily, I work with a small team, and we do ongoing analysis of bugs as part of our retrospectives. Although not precise, these discussions help us identify and handle the risky parts of the code.

More Data, Better Analysis
As a developer, I never thought about grouping bugs. When I found them in my code before anyone else did, I didn’t even call them bugs. When I “grew up” and adopted a more encompassing point of view, I started looking at them differently.

Bug information doesn’t live in a vacuum. In agile, we talk about context and how it’s part of the information.

With a bug, we’re interested not just in the description, but also in where and when it was found, how the analysis was done and who did it, and so on. We can then group bugs together to point us at quality problems.

Every once in a while, it helps to take a look at the big picture rather than just at the bugs individually. Bugs are usually symptoms of ineffective processes.

How Do I Start?
Start with simple questions about bugs, like “Where do they come in droves?” and “Where do they rarely appear?” Then decide what to track and follow up on. A first pass can be as simple as the sketch below.
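
Counting bugs per component answers the “droves” question directly. Here’s a minimal sketch over the same hypothetical bugs.csv export, assuming a component column filled in during triage.

```python
# Sketch: answer "where do bugs come in droves?" by counting per component.
# Assumes the same hypothetical bugs.csv export, with a 'component' column
# tagged during triage.
import csv
from collections import Counter

def bugs_per_component(path):
    with open(path, newline="") as f:
        return Counter(row["component"] for row in csv.DictReader(f))

# Components listed from most to least bug-ridden.
for component, n in bugs_per_component("bugs.csv").most_common():
    print(f"{n:4d}  {component}")
```

Swap “component” for any other context column (tester, phase found, customer) and the same three lines of counting answer a different question.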

Then continue to ask questions. These bugs aren’t going away on their own, are they?

User Comments

Rob Black:

Are you also doing a root cause analysis and tracking the results? For example, is the defect caused by a missed unit test, changing requirements, a missing requirement, a misunderstood requirement, an unimplemented feature, an architectural breakdown, etc.? Do you also track defects in documentation that take away from time spent on the core deliverable application? I’m a firm believer that rework costs money, and rework can occur in many different software lifecycle artifacts. For some applications or projects, the supporting artifacts are of great value. For instance, if a team is to deliver an SDK, the supporting documentation is very valuable in reducing support calls. In other efforts, the supporting artifacts may aid in certification or regulatory approval, whether from internal audits, external government audits, customer audits, etc. Quality is measured by the customer’s perceived value of a product, yet the cost of quality is measured internally.

January 6, 2014 - 7:27pm
