Vendors try to protect their source code from would-be attackers, but it takes only one chink in the armor for a good reverse engineer to penetrate all the defenses so carefully put in place. Find out how to methodically uncover patterns to help you predict where the attacks will be focused and how they will be carried out.
On 1 April 2001, a United States EP-3E Aries II surveillance plane collided with a Chinese fighter seventy miles off the Chinese island of Hainan. The EP-3E lost altitude but was able to recover. It landed about twenty minutes later at China's Lingshui air base on Hainan. The Chinese government had the EP-3E in its possession for several days. After the plane's return to the USA, the US intelligence community concluded that its encrypted signals collection capability had been compromised. How? Through skilled software reverse engineering by the Chinese. Undoing the damage cost billions of dollars.
What Is Software Reverse Engineering?
Reverse engineering is a way of examining a completed product in order to figure out how it works. When a plane or a missile falls into enemy hands, the fear is that the enemy will have time to take it apart and learn its secrets. After all, it's much easier to devise a way to defeat a plane, missile, or surveillance system when you know its strengths and weaknesses. That's why military specs are top secret. But it's not only mechanical technologies that can be taken apart and examined—software can be reverse engineered, too. Software reverse engineering is more complicated because software is inherently obscure. If you examine a program on disk or in memory, it displays as binary code: nothing but ones and zeroes. This inherent obscurity hasn't proved to be much of a barrier, though. People who attack binary code are usually highly skilled reverse engineers using powerful tools that enable them to understand what otherwise would be meaningless machine code.
Vendors have tried to make it more difficult to interpret binary code by adding various explicit protections. For example, to foil a reverse engineer who's trying to understand the sequence of logical decisions a program makes, the vendor could run it through an "obfuscator" that obscures the orderly flow of decisions. But explicit protections can only do so much. A successful defensive technology must thwart each and every attack, whereas a reverse engineer has to find only one chink in the armor. For instance, the security reference monitor succeeded in foiling many attacks against Windows NT; however, one lowly user changed two well-placed bytes in process memory and was able to gain full administrator privileges. No firewall vendor claims 100 percent effectiveness, and firewalls have indeed been broken. Embedded systems may thwart the casual hacker, but it's no secret that such code usually presents less of a challenge to skilled attackers. The odds are weighted heavily against the vendors.
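To make the obfuscator idea concrete, here is a minimal sketch of one common transformation, control-flow flattening: straightforward branching logic is rewritten as a dispatch loop over numbered states, so the orderly flow of decisions is no longer visible in the code's shape. This is an illustration only; real obfuscators operate on compiled binaries or intermediate code rather than Python source, and the function and state numbers here are invented for the example.

```python
def check_pin_clear(pin: str) -> bool:
    """The original, easy-to-follow logic."""
    if len(pin) != 4:
        return False
    if not pin.isdigit():
        return False
    return True


def check_pin_flattened(pin: str) -> bool:
    """The same logic, flattened into an opaque state machine.

    Each branch of the original becomes a numbered state; a single
    dispatch loop hides which decision follows which.
    """
    state, result = 0, False
    while state != 99:          # 99 is the (arbitrary) exit state
        if state == 0:
            state = 1 if len(pin) == 4 else 90
        elif state == 1:
            state = 2 if pin.isdigit() else 90
        elif state == 2:        # all checks passed
            result = True
            state = 99
        elif state == 90:       # some check failed
            result = False
            state = 99
    return result
```

Both functions compute the same result for every input; the point is that a reverse engineer reading the flattened version (or its disassembly) must reconstruct the decision sequence from the state transitions rather than simply reading it top to bottom.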
A Methodical Approach
The methodical process I propose provides developers and evaluators of defensive technologies with a means of assessing future attacks on executable code before it is deployed in the field. It's an offensive strategy that guides a testing organization and focuses its efforts on understanding how attackers will target the defensive functionality of an application. As you'll see, fully executing the process requires a great deal of knowledge that can only be acquired through experience. But even if your product doesn't require extreme security, the structure of the approach should prove valuable.
The first step in the process is to use a cause-effect diagram to detail the features of the protection scheme that could be broken. Such diagrams show the possible ways attackers could produce a particular effect. Next, attack modeling uses diagrams called attack trees to show the series of effects an attacker would have to compose to break through the protection schemes in place. Suppose an attacker seeks to obtain binary data secured on disk. Along the way there may be many defensive points to