For an example of how Java provides opportunities for security breaches, imagine that you are using reflection to invoke methods, so the names of the methods you invoke are specified dynamically and are thus unknown until runtime. In this situation, clients who pass certain parameters might be able to invoke methods that you would not expect to be accessible (based on a typical analysis of the code).
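The danger can be sketched in a few lines. In this hypothetical example (all class and method names are invented for illustration), a service dispatches requests to methods by name via reflection, which makes an internal method just as reachable as the intended one:

```java
import java.lang.reflect.Method;

// Hypothetical service that dispatches client requests to methods by name
// via reflection. Any public method becomes reachable if the client can
// name it -- including one the author never meant to expose.
public class ReflectiveDispatcher {
    public String greet() { return "hello"; }

    // Intended for internal use only, but reflection does not know that.
    public String dumpConfig() { return "db.password=secret"; }

    // Invokes whatever public zero-argument method the client names.
    public static Object dispatch(String methodName) {
        try {
            Method m = ReflectiveDispatcher.class.getMethod(methodName);
            return m.invoke(new ReflectiveDispatcher());
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(dispatch("greet"));      // the call you expected
        System.out.println(dispatch("dumpConfig")); // the call you did not
    }
}
```

A static analysis of the calling code shows no call to `dumpConfig`, yet a client who guesses or learns the name can invoke it.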
You can still get into trouble with Java-based services even if you're not using reflection and all your method invocations are explicit. One way to get into trouble is to leave an opening through which a hacker can insert a jar file onto your machine or into your classpath; if the hacker succeeds, client method invocations will call the hacker's methods instead of the service's original methods. Code that handles uncaught runtime exceptions can open another security hole. Imagine that a client triggers a runtime exception that propagates up several layers of your call stack before it is handled, and the handling code exposes some functionality that you did not want exposed. This behavior is difficult to predict because it stems from exception handling code that is far removed from the service you are providing.
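The exception-handling hole can be made concrete with a minimal sketch (hypothetical names throughout): a catch-all handler, several layers above the method that actually fails, responds to any runtime exception with diagnostic output that was never meant to reach clients.

```java
// Hypothetical sketch of a leak through exception-handling code.
public class ExceptionLeak {
    // Internal diagnostics -- never intended to be sent to clients.
    public static String internalDiagnostics() {
        return "stack trace + environment details";
    }

    // Deep service layer: throws NullPointerException on null input.
    public static String lookup(String key) {
        return key.toUpperCase();
    }

    // Outer layer: the catch-all handler is far removed from lookup(),
    // so the author of lookup() never anticipated this exposure.
    public static String handleRequest(String key) {
        try {
            return lookup(key);
        } catch (RuntimeException e) {
            return internalDiagnostics(); // the security hole
        }
    }

    public static void main(String[] args) {
        System.out.println(handleRequest("abc")); // normal response
        System.out.println(handleRequest(null));  // leaks diagnostics
    }
}
```

The author of `lookup` sees only a simple string operation; the exposure happens in code that person may never read.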
While hackers can occasionally access the inner workings of a traditional application (for example, by causing memory overwrites or exceptions), it is markedly easier for hackers to do so with Web services because Web services allow the initial access into the application. If you have a traditional application, hackers trying to access the parts of the program you want to protect would have to do something comparable to picking the lock on your home's front door, then locating your private cash stash. With Web services, you hand the crook the key to the house and hope that he doesn't stumble upon something you don't want him to take. Fortunately, you can cut off access to private areas of the application by establishing security boundaries within the Web service. A solid security boundary will protect the private areas of the application like a vault protects the items locked within it; when you have such a boundary/vault, you can rest assured that whoever gains access to your service/house will not be able to touch the methods/items you are trying to protect.
Verifying Inner Security Boundaries with Unit Testing
I've found that unit testing is one of the best ways to ensure that the parts of your application that you intend to be protected are actually protected. By "unit testing," I mean "testing the smallest unit of an application (a class in Java or a function in C), module, or submodule apart from the rest of the system."
Unit testing is helpful for this type of security testing because when developers and testers test at the unit level, it is considerably easier for them to test all of the possible paths that hackers could take during their attempts to reach unexposed methods or perform illegal operations. Developers sometimes make dangerous assumptions such as "There is no way to reach Method D through Method A." Unit testing—in particular, white-box testing (trying to fully exercise all paths through the unit with a wide range of unexpected inputs)—is probably the best way to verify that these assumptions are correct.
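To see how such an assumption fails, consider this contrived sketch (all names hypothetical): the author believes Method A can never lead to Method D, but an obscure input routes through a forgotten fallback in an intermediate method and reaches it anyway.

```java
// Hypothetical sketch of the assumption "there is no way to reach
// methodD through methodA" being wrong.
public class HiddenPath {
    public static boolean reachedD = false;

    // The "protected" operation that should stay unreachable.
    public static void methodD() { reachedD = true; }

    // Intermediate layer with a forgotten debug fallback.
    public static void methodB(String cmd) {
        if ("debug".equals(cmd)) {
            methodD();
        }
    }

    // The exposed entry point; its author assumes methodD is unreachable.
    public static void methodA(String cmd) {
        methodB(cmd);
    }

    public static void main(String[] args) {
        methodA("normal");
        System.out.println("after normal input: " + reachedD); // false
        methodA("debug");
        System.out.println("after debug input:  " + reachedD); // true
    }
}
```

A white-box unit test that exercises `methodA` with unexpected inputs exposes the hidden path; a test that only tries "normal" inputs never would.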
White-box unit testing involves designing inputs that thoroughly exercise the exposed methods, then examining how the application handles the test inputs. For example, if you wanted to check whether any possible uncaught runtime exceptions cause a service to expose "protected" methods, you would flood the service's exposed methods with a wide variety of inputs to try to flush out all possible exceptions, then examine how the service responds to each exception. If you wanted to verify that hackers could not place Java .jar files in your application and/or CLASSPATH, you would design test cases that attempt to add such files through every possible service entry point, then see whether these attempts fail. If you find these or other security holes during the testing phase, you have the opportunity to fix the problem before an actual security breach occurs.
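The flooding technique can be sketched as follows. This hypothetical test probes one exposed method with a spread of boundary inputs and sorts the outcomes into expected failures versus exceptions the service author likely never handled:

```java
// Minimal sketch of flooding an exposed method with varied inputs to
// flush out uncaught runtime exceptions (names are hypothetical).
public class InputFloodTest {
    // Exposed method under test: parses a quantity field from a request.
    public static int parseQuantity(String raw) {
        return Integer.parseInt(raw.trim());
    }

    public static void main(String[] args) {
        String[] probes = { "5", " 42 ", "", null, "abc", "2147483648", "-1" };
        for (String probe : probes) {
            try {
                parseQuantity(probe);
            } catch (NumberFormatException expected) {
                // Documented failure mode; the service can map this to a
                // clean fault message.
            } catch (RuntimeException unexpected) {
                // e.g. NullPointerException from null.trim(): an exception
                // path the service author probably never anticipated.
                System.out.println("Uncaught path found for input " + probe
                        + ": " + unexpected.getClass().getSimpleName());
            }
        }
    }
}
```

Once an unexpected exception surfaces, the next step is to trace how the service's outer layers respond to it, which is where the exposure described above would occur.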
How do you determine how much testing is enough? Ideally, you want to check whether any possible input causes unexpected access, but testing every possible input to a method is typically not feasible. A more practical goal is to try to cover each path through the unit at least once.
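A small example shows why path coverage is tractable where exhaustive input testing is not. This sketch (hypothetical method) has two independent branches and therefore four paths; covering each path once requires only four inputs, versus billions of possible `(x, y)` pairs:

```java
// Sketch: a unit with two independent branches has four paths, so
// "cover each path at least once" means four test inputs here.
public class PathCoverage {
    public static String classify(int x, int y) {
        String s = (x > 0) ? "x+" : "x-"; // branch 1
        s += (y > 0) ? "y+" : "y-";       // branch 2
        return s;
    }

    public static void main(String[] args) {
        // One input per path: (+,+), (+,-), (-,+), (-,-)
        int[][] probes = { {1, 1}, {1, -1}, {-1, 1}, {-1, -1} };
        for (int[] p : probes) {
            System.out.println(classify(p[0], p[1]));
        }
    }
}
```

Note that path counts grow multiplicatively with the number of independent branches, which is another argument for keeping the units under test small.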