Noel: That's a really amazing amount of research. One of the many great topics that you cover in this book deals with "forcing unusual bug cases." Two questions for you: What makes certain bug cases "unusual," and secondly, what are some ways that these are "forced" out into the light?
Jon: So, for the first question: Embedded systems, and now even mobile smartphones, often control something physical, such as hardware, airplanes, cars, and even hearts, in the case of pacemakers. That control happens in the “real” world, which is complex, if not infinitely complex, in the variety of situations it can present. Programmers try to anticipate these situations, but obviously they cannot address and program for every one. So, unusual bug cases can hide in at least two ways. First is the case where the developers simply miss a plausible control scenario, say, for example, a case where the stability control system of a car must handle a slow skid in a turn on ice. If the logic to handle this case is not coded, the system has a case, maybe not overly common, that is not handled.
The second case is where the code contains the logic to handle a situation, but some of that logic was not coded correctly, and the team never runs a test with exactly the right conditions to trigger the bug. It is estimated that in many embedded systems, upwards of 80 percent of the code deals with such “unusual” situations. This is where the bugs hide, waiting for just the right situation to happen before they are exposed. Therefore, many of the attacks in the book deal with patterns to force testing into these types of “unusual” situations.
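The "missing case" pattern Jon describes can be sketched in a few lines. This is a purely hypothetical illustration, not real stability-control code: the function names, thresholds, and actions are all invented. The point is that the handler covers the scenarios the developers anticipated and silently falls through for the uncommon combination they missed, which no test ever triggers.

```python
# Hypothetical sketch of the "missing case" bug pattern.
# All names and thresholds are invented for illustration.

def select_response(skid_rate: float, surface: str) -> str:
    """Return a (hypothetical) stability-control action."""
    if surface == "dry" and skid_rate > 0.5:
        return "brake_outer_wheel"
    if surface == "ice" and skid_rate > 0.5:
        return "cut_throttle_and_brake"
    # Missing: a slow skid (skid_rate <= 0.5) on ice, the uncommon
    # combination that was never coded and never hit by a test.
    return "no_action"

print(select_response(0.3, "ice"))  # the unhandled case: "no_action"
```

An attack-style test would deliberately sweep such combinations of inputs (surface type crossed with skid rate) rather than only exercising the scenarios the designers listed.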
In terms of the second question, the book has numerous stories and sidebar examples where more testing might have forced the bug to be found. I mentioned the stability control system of a car, which was a real case. Bugs like this can cause recalls, which are expensive and drain company profits, and bad publicity, which is maybe worse. So, how much more “good” attack testing might have been justified is a question that projects must consider. For the situation where some segment of code has not been tested under the right conditions, I can cite the story from the book of the Patriot missile system, which had been in use for years, but when left running for long periods of time accumulated a timing error, with the result that the system failed and people died. Who wants to be the tester who worked on that missile just before that event?
Hopefully, the attacks in my book will help testers to see differently, to try new things (i.e., tests)—and in the end, companies will profit and software will be better.
Noel: I'm not just trying to state the obvious here, but how vital is quality software testing in relation to embedded devices? We've gone from software in our phones to our cars, medical devices, our entire homes. There's this great race to essentially embed everything. Do you feel like those organizations and corporations that are having these devices developed are taking the testing of these devices seriously enough?
Jon: First let me say that many organizations I have worked with or supported take testing of these kinds of devices very seriously. When there is large risk in the form of losses such as money, life, or other resources, much good testing gets done. Many companies operate under regulatory rules or the threat of legal action. But what my study of bugs that made it into the field and usage showed is that there were patterns in the errors, patterns where we as software people can do a better job. Developers and testers both missed perhaps hard-to-find bugs. In other cases, in industry areas that were perhaps less diligent in their testing, different bug patterns appeared in the taxonomy. In both of these situations, my study indicated to me that improvements could be made by including risk-based attack test patterns. After doing a detailed public error taxonomy study covering years of data across several embedded domains, I came to the idea that attack-based testing could offer a better set of ideas for a tester to consider.