too simple to be worth a test, easy enough to validate by inspection, and unlikely to be incorrect, since the methods were (initially) generated by the IDE. But as the code evolved, people forgot to review the methods to be sure that they stayed in sync. While code reviews, whether through pair programming or a more structured process, can be valuable for identifying unanticipated problems, an automated test is quicker and more reliable than visual inspection for validating that your code honors low-level contracts you already know about. Tests like these are simple to write and can more than pay for themselves if they prevent even one day of your team puzzling over an obscure problem.
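For instance, a test along the following lines can guard an IDE-generated equals() and hashCode() pair. This is only a sketch: Money is a hypothetical value class standing in for whatever class your IDE generated the methods on.

    import static org.junit.Assert.*;
    import org.junit.Test;

    public class MoneyEqualityTest {

        // Hypothetical value class; equals() and hashCode() were
        // (initially) generated by the IDE from the two fields.
        static class Money {
            private final int amount;
            private final String currency;

            Money(int amount, String currency) {
                this.amount = amount;
                this.currency = currency;
            }

            @Override
            public boolean equals(Object o) {
                if (this == o) return true;
                if (!(o instanceof Money)) return false;
                Money other = (Money) o;
                return amount == other.amount
                        && currency.equals(other.currency);
            }

            @Override
            public int hashCode() {
                return 31 * amount + currency.hashCode();
            }
        }

        @Test
        public void equalValuesAreEqual() {
            assertEquals(new Money(10, "USD"), new Money(10, "USD"));
            assertFalse(new Money(10, "USD").equals(new Money(10, "EUR")));
        }

        @Test
        public void equalObjectsShareAHashCode() {
            // The contract the two methods must honor together: equal
            // objects must have equal hash codes. This fails if someone
            // edits one method (or adds a field) without regenerating
            // the other.
            assertEquals(new Money(10, "USD").hashCode(),
                         new Money(10, "USD").hashCode());
        }
    }

A test like this takes minutes to write, and it fails the build at the moment the methods drift apart rather than days later.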
Think Outside the Code
Another issue, especially with frameworks that rely heavily on configuration, is that the configuration you use for developer testing will, of necessity, differ from the one you use in deployment. For example, a developer test may use a simplified database interface, while code deployed in a web container will use a real database connection. The unit tests may pass with flying colors, yet the application will not run. While it is good practice to start up the complete application before committing code, it is still helpful to have an automated way of identifying common configuration mistakes.
In one project I worked on, we had a good suite of unit tests with excellent code coverage and a good record of builds passing. Yet every so often, a build that passed the unit tests would not start up when deployed as part of a web application. The failure message was obscure, and a number of people were often blocked when the problem occurred. We traced the problem to errors in the deployment configuration: either a typo or, more often, a reference to a class that no longer existed. While we could have written an integration test to make sure that the application worked end to end, that would have been difficult to run in the context of our integration build. Instead, we wrote a simple test that caught the problem by validating that the Spring configuration loaded successfully. After we added this seemingly low-value, low-effort test, we rarely had the problem again.
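The test itself can be nearly a one-liner. This sketch assumes an XML-configured Spring application whose bean definitions live in a classpath file named applicationContext.xml; substitute whatever file your deployment actually loads.

    import org.junit.Test;
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class SpringConfigurationTest {

        @Test
        public void configurationLoads() {
            // Loading the context throws an exception (and fails the
            // test) on a typo, a malformed bean definition, or a
            // reference to a class that is not on the classpath.
            new ClassPathXmlApplicationContext("applicationContext.xml").close();
        }
    }
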
In another case, we traced recurring application startup errors to syntax errors in a large configuration file that was edited by hand. A test that validated the XML file with a validating parser allowed whoever made a change to identify and fix errors quickly.
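A sketch of such a test, assuming the configuration file declares a DTD to validate against; the file path here is hypothetical, so point it at the hand-edited file in your project.

    import java.io.File;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.junit.Test;
    import org.xml.sax.ErrorHandler;
    import org.xml.sax.SAXParseException;

    public class ConfigFileSyntaxTest {

        @Test
        public void configFileParsesCleanly() throws Exception {
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setValidating(true); // validate against the declared DTD
            DocumentBuilder builder = factory.newDocumentBuilder();
            // Escalate every warning and error into a test failure.
            builder.setErrorHandler(new ErrorHandler() {
                public void warning(SAXParseException e) throws SAXParseException { throw e; }
                public void error(SAXParseException e) throws SAXParseException { throw e; }
                public void fatalError(SAXParseException e) throws SAXParseException { throw e; }
            });
            // Hypothetical path to the hand-edited configuration file.
            builder.parse(new File("conf/application-config.xml"));
        }
    }

Because the parser reports the line and column of each error, whoever broke the file gets pointed straight at the offending edit.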
In both cases, a simple test caught a problem that had stopped the application from running. These sorts of errors slow down anyone who updates the code and make people reluctant to work with the latest code, which is a barrier to continuous integration. Both of these problems could, in principle, have been caught by "inspection" or by "being careful," but programmers are human, and mistakes slip by even the most detail-oriented developer, especially when there are more than a few moving parts. In the end, programmer time is better spent identifying design issues than checking code for syntax errors.
Some of these cases also pointed to a fragile configuration mechanism, which you might want to improve. By keeping track of how often such tests fail during precommit testing, you gather the data you need to decide how much effort you can justify spending on architectural approaches that minimize the risk of configuration errors.
Testing the Trivial
In some cases, unit testing what you need to test can be difficult because of a lack of framework support. While I was working on a VXML application, the team did not have access to a good mechanism to write