The one thing that is crystal clear with respect to software security is that it isn't done well. Security bugs and design deficiencies that allow digital information to be stolen or tampered with are far too prevalent. As testing professionals, we have a big problem, and a big opportunity, on our hands. Learn ways to find security vulnerabilities in your system.
The one thing that is crystal clear with respect to software security is that it isn't done well. Security bugs and design deficiencies that allow digital information to be stolen and software to be tampered with are far too prevalent. As testing professionals, we have a big problem, and a big opportunity, on our hands.
Why an opportunity? Testing has the unique ability to generate knowledge about security vulnerabilities. Think about it. The whole point of testing is that we don't know whether our software is reliable (or secure, trustworthy, safe, etc.), so we test to gather the best information we can. During this information-gathering process we learn. We learn about our software, we learn about our testing discipline, and we gain an understanding of quality, security, and many other aspects of the systems under test that would otherwise remain hidden. By exposing these vulnerabilities, we testers can give developers the keys to designing more secure systems.
Understanding Security Vulnerabilities
So how is a security bug born? Very differently from the more familiar functionality bug. Requirements analyses and specifications generally document software behaviors that should be implemented in the software under test. This is called the software’s intended functionality. As testers are painfully aware, the actual functionality that gets implemented in the code rarely covers the intended functionality completely. This is where testers come in: We find the things that should have been implemented but were not, as well as the things that were implemented incompletely or incorrectly.
But security bugs are not usually found in areas of incomplete functionality. Indeed, actual functionality can completely cover intended functionality and security holes can still exist. How can this be possible? Most security bugs are the unintentional side effects of implemented functionality. They sneak in because the entire software development culture is so focused on what the software should do—its features—that it fails to consider what it shouldn’t do. With a security bug, the software does everything we expect it to do; the problem is it may do more than we want.
Imagine a password entry screen that stores plaintext passwords in memory for too long. Although the application may do everything it is supposed to do, this one side effect can have devastating security implications. Imagine a music player that leaves unencrypted music artifacts on the hard drive during playback, deleting them only when the music stops. The player works fine for the casual user, but hackers will be all too happy to take advantage of the pirating opportunity provided by this side-effect behavior. Imagine an encryption algorithm that passes information pertaining to the encryption key across the boundary of its executable (by writing it to the registry, for example). The encryption may work perfectly, but the side effect of exposing the key is a vulnerability.
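To make the first of these examples concrete, here is a minimal C sketch of the plaintext-password side effect. The function names and the authenticate() check are hypothetical, invented purely for illustration. The leaky version does everything it is supposed to do; the problem is what it leaves behind.

    /* Sketch: the same password prompt, with and without its side effect. */
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stand-in for a real credential check. */
    static int authenticate(const char *password) {
        return strcmp(password, "s3cret") == 0;
    }

    /* Leaky version: the intended functionality works, but the plaintext
       password lingers on the stack (and potentially in swap space or a
       core dump) long after it is needed. */
    int login_leaky(void) {
        char buf[64];
        printf("Password: ");
        if (fgets(buf, sizeof buf, stdin) == NULL) return 0;
        buf[strcspn(buf, "\n")] = '\0';   /* trim the trailing newline */
        return authenticate(buf);
        /* buf goes out of scope here, but its contents remain in memory. */
    }

    /* Scrubbed version: zero the buffer the moment the check completes,
       limiting how long the plaintext exists. Note that a plain memset
       just before return can be optimized away by the compiler; C11's
       memset_s or a platform call such as explicit_bzero is more
       reliable in production code. */
    int login_scrubbed(void) {
        char buf[64];
        int ok;
        printf("Password: ");
        if (fgets(buf, sizeof buf, stdin) == NULL) return 0;
        buf[strcspn(buf, "\n")] = '\0';
        ok = authenticate(buf);
        memset(buf, 0, sizeof buf);       /* limit the plaintext's lifetime */
        return ok;
    }

    int main(void) {
        printf(login_scrubbed() ? "granted\n" : "denied\n");
        return 0;
    }

A functional test would pass both versions: the prompt appears, the right password is accepted, the wrong one is rejected. Only a tester who asks where the secret travels after the feature "works" will catch the difference.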