At some point in the software development lifecycle, regardless of which model we use, we have to make some tough decisions. Which defects do we fix? Which do we let go? How do we decide? Triage is one way!
Triage. If you're a fan of the TV show M*A*S*H, then you're probably familiar with the term "triage." It's also a concept we can apply to software testing.
According to Wikipedia:
"Triage is a system used by medical or emergency personnel to ration limited medical resources when the number of injured needing care exceeds the resources available to perform care so as to treat the greatest number of patients possible."
Triage is actually a French word meaning "sorting." In medical triage, patients on a battlefield or at the scene of an emergency are evaluated to determine who needs immediate care and who can wait. At times, doctors or medics may decide that the severely or critically injured should not receive immediate care because they are not likely to survive and would tie up scarce resources that could be used to save others.
So how does that apply to software testing? Let's change the definition a bit.
"Triage is a system used by software development teams to ration limited technical resources when the number of defects needing resolution exceeds the resources available to correct and verify them so as to resolve the greatest number of defects possible."
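The adapted definition is, at heart, a rationing loop: sort the backlog by how serious each defect is, then fix as many as the available capacity allows. A minimal sketch in Python, with hypothetical names (`Defect`, `impact`, `fix_cost`) that stand in for whatever your defect tracker actually records:

```python
from dataclasses import dataclass

@dataclass
class Defect:
    id: str
    impact: int    # higher = more serious (illustrative scale)
    fix_cost: int  # estimated hours to correct and verify

def triage(defects, budget_hours):
    """Select defects to fix within the available hours,
    taking the highest-impact ones first."""
    chosen = []
    for d in sorted(defects, key=lambda d: d.impact, reverse=True):
        if d.fix_cost <= budget_hours:
            chosen.append(d)
            budget_hours -= d.fix_cost
    return chosen

backlog = [
    Defect("D-1", impact=4, fix_cost=8),
    Defect("D-2", impact=2, fix_cost=2),
    Defect("D-3", impact=4, fix_cost=16),
    Defect("D-4", impact=1, fix_cost=1),
]
fixed = triage(backlog, budget_hours=12)
# D-3 is serious but too expensive for the remaining budget, so it waits --
# the software analogue of the battlefield decision described above.
```

Real triage weighs more than a single impact number, as the next section explains, but the shape of the decision is the same: limited resources, greatest number of defects resolved.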
If there is one concept that testers and test managers are acutely familiar with, it's "limited resources." Unfortunately, we can't fix and retest everything in the limited amount of time we have or with our current resources. We'll just ask for more time or more people, right? Riiiiight!
So, how can we make the best use of the limited time and people we have? Triage!
Severity and Priority
Successful triage requires two similar yet very different concepts: Severity and Priority. With the triage system, each defect is assigned both a Severity and a Priority. Many defect-tracking systems use one or both of these concepts, but they are sometimes treated interchangeably. They are really two separate and distinct concepts. Let's take a closer look.
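One way to keep the two concepts from blurring together is to model them as separate, explicit fields on the defect record. A sketch in Python; the level names and the four-level scheme are examples only, not any particular tracker's convention:

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    """Impact of the defect on the user or customer."""
    CRITICAL = 1
    SEVERE = 2
    MODERATE = 3
    COSMETIC = 4

class Priority(IntEnum):
    """Order in which the team will address defects."""
    IMMEDIATE = 1
    HIGH = 2
    NORMAL = 3
    LOW = 4

@dataclass
class Defect:
    id: str
    summary: str
    severity: Severity
    priority: Priority

# A cosmetic defect can still demand immediate attention -- say, the
# company name misspelled on the home page -- which is exactly why the
# two fields must stay separate.
typo = Defect("D-42", "Company name misspelled on home page",
              Severity.COSMETIC, Priority.IMMEDIATE)
```

Keeping the fields distinct means a report can be low-impact yet urgent, or high-impact yet deferrable, without anyone bending one scale to express the other.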
Severity is used to define the impact that a defect has on the user of the application, or customer. Impact is probably a better term. We assign Severity levels to defects to define the seriousness of the problem.
So how many levels do you need? It's like the story of Goldilocks and the Three Bears: six is too many, three are too few, and you want the number that's just right.
Personally, I like to use an even number of Severity levels. With an odd number of levels, like the typical 1-5, too many defects tend to get put on the fence in the middle (severity = 3). With an even number (like 1-4), you indirectly force a decision. No fence-sitting. Can you have too many levels? Absolutely! Too few? Of course. If you have too many levels, managing defects becomes a nightmare. Too few, and you may not be able to fully define the impact of the defect. I worked with one system that had something like 15 Severity levels (3 levels of critical, 3 levels of severe, and so on). After a while, Severity just became meaningless.
I try to avoid using just a number to define Severity levels. I typically include a brief description with the number (1-Critical, 2-Severe, 3-Cosmetic, etc.). Numerical rankings alone can be confusing (is a 1 the most severe, or is a 4?). One major test tool vendor uses 5 as the most severe and 1 as the least; another vendor does the complete opposite. Regardless of how you rank them, it's also important to define and document the criteria for assigning each severity level to a defect. Severity levels are initially assigned by a