Coming up with definitions for the terms we testers use is a tough job. People working on different projects within the same organization will use different vocabulary, or, even worse, the same vocabulary with different meanings. This can happen even with acronyms. And that is just within one organization—what about when we cross different industries and different countries? The problem becomes huge. Without a common vocabulary, we end up wasting time we cannot afford to lose.
This is to be expected; we are a young industry. Compared to other industries, IT is not only the new kid on the block but also the kid who is involved in pretty much everything else going on in the neighborhood. We are too busy testing system releases to spend much time defining terms. I have developed definitions for the IEEE 829-2008 Standard for Software and System Test Documentation, and I can report that it is a brutal process. We just don't agree.
Something that has fascinated me for a while is that there are several very helpful terms in common use that rarely—or never—make it into the "official" definition documentation. So I've come up with a list of terms I think every tester should know.
CYA (Cover Your Assets): Nah, nah—you weren't expecting that, were you? This is a part of corporate survival. As testers, we face an infinite challenge with finite resources, so preparing for the inevitable is a pretty good idea. How many times has someone asked about a defect found in production: "Why didn't you test that?" The scientific answer is, "You have no idea how lucky you are that there aren't more." Unfortunately, that is a very bad time to point out the truth, so advance preparation (e.g., documenting that the testing schedule was cut twice to half of the original estimate) is a pretty good idea. This is best done with metrics. They will help no matter what—even if things are too politically hot to allow you to publicize the metrics. One of the key rules of metrics is "Never lie to yourself." Tell others what you have to, but keep your own truth so that you can learn and do better in the future.
GIGO (Garbage In, Garbage Out): This acronym has been around for a long time. I was a developer on some of those awful systems that started with no or incredibly vague requirements and built "something" that the end-users just hated. And guess what? We are still doing this on some projects! As an industry, this is still our sticking point. If we have good requirements, we know how to deliver a good system. If we have bad or incomplete requirements, we will not do as well and we'll have more rework.
Grey Box: Considering both white box and black box aspects at the same time while designing test cases. Given typical schedule pressures, testers need to make as much progress as possible for each activity. People who think that you have time to contemplate just black box or just white box in isolation have never had a real job.
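To make that concrete, here is a minimal sketch of grey-box test design. The function under test (save_username) and its 255-character internal limit are hypothetical; the point is that knowing an internal boundary (white box) lets you pick the black-box inputs most likely to expose a defect, in one pass.

```python
# Grey-box sketch: white-box knowledge (an internal length limit) drives
# black-box input selection. MAX_LEN and save_username are illustrative.

MAX_LEN = 255  # assumed internal storage limit, learned by reading the code

def save_username(name: str) -> str:
    """Toy implementation standing in for the real system under test."""
    return name[:MAX_LEN]

def test_boundary_values():
    # Black-box style inputs, chosen right at the known internal edge.
    for length in (MAX_LEN - 1, MAX_LEN, MAX_LEN + 1):
        stored = save_username("x" * length)
        assert len(stored) <= MAX_LEN, f"stored too much at input length {length}"

test_boundary_values()
print("boundary tests passed")
```

One set of test cases covers both perspectives at once, which is exactly the schedule-driven economy the definition describes.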
Happy Path: The most used functions in a system, with nothing unusual or wrong happening. This is best to test first, as everyone is much more likely to be happy if it works.
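As a small sketch of "happy path first" (the discount function here is hypothetical): test the common, everything-works case before spending time on edge cases and error handling.

```python
# Happy-path-first sketch. apply_discount is an illustrative stand-in
# for the most-used feature of a system.

def apply_discount(price: float, percent: float) -> float:
    """Toy function under test."""
    if not (0 <= percent <= 100):
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

def test_happy_path():
    # The typical, valid, nothing-unusual case runs first.
    assert apply_discount(100.0, 10.0) == 90.0

def test_edge_cases():
    # Only after the happy path passes do the edges earn attention.
    assert apply_discount(100.0, 0.0) == 100.0
    assert apply_discount(100.0, 100.0) == 0.0

test_happy_path()
test_edge_cases()
print("happy path first: all tests passed")
```

If the happy path fails, everyone stops being happy immediately, so there is little value in polishing edge-case tests before it works.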
Low-Hanging Fruit: A task that will produce a lot of results with a relatively small amount of effort. It looks good and boosts our morale.
Plagiarism: When you copy from one source. Don't do it—footnote and stay legal. (See also Research)
Research: When you copy from multiple sources. Still needs footnotes. (See also Plagiarism)