they often duplicate a lot of routine user activities, like logging in or opening a file. They often execute lengthy scripts that take the same actions a user would take at the keyboard and mouse.
Automated acceptance tests are often written by different people with different skills, such as technical testers, who write the tests at a different time than the code is written--often much later, after the code is done.
Running these tests isn't quick either. In many organizations these tests take hours to run, so they're often run only at night or at fixed times during the day. And they're often run on code after it's checked into the code base, so they often don't prevent bugs from getting into the code--they just point out that the bugs are there.
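To make the pattern concrete, here's a minimal sketch of what such a script-style acceptance test tends to look like. Everything here is hypothetical: the `App` class stands in for code that would actually drive a real user interface, and the test names are invented. Notice how every test repeats the same routine user steps--logging in, opening a file--before getting to what it actually checks.

```python
# Illustrative sketch only: the App class is a hypothetical stand-in
# for driving a real UI (real suites would type into fields, click
# buttons, and wait for screens to load).

class App:
    def __init__(self):
        self.user = None
        self.open_files = []

    def login(self, username, password):
        # A real test would enter these into login fields and submit.
        if password == "secret":
            self.user = username

    def open_file(self, name):
        self.open_files.append(name)


def test_report_shows_hours():
    app = App()
    app.login("alice", "secret")      # routine steps, repeated...
    app.open_file("timesheet.xls")    # ...at the start of every test
    assert app.user == "alice"


def test_export_works():
    app = App()
    app.login("alice", "secret")      # the same login steps again
    app.open_file("timesheet.xls")
    assert "timesheet.xls" in app.open_files


test_report_shows_hours()
test_export_works()
```

Multiply that duplicated setup by hundreds of tests driving a real UI, each step waiting on screen transitions, and the hours-long run times described above follow naturally.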
Failed Acceptance Tests Often Deliver News Too Late and to the Wrong Person
For all these reasons, we don't get that happy feeling the developer gets when he sees a unit test fail. When an acceptance test fails, it's usually a long time after the offending code has been checked in. In fact, a lot of code may have been checked in, which makes finding the offending code difficult. Also, it's not always clear who should be finding and fixing the issue. It's not the person who wrote the test, if he's in a role that writes tests and not code. It's not clear which developer should fix the code, and, even if it were, that developer has probably already moved on to something else, so now it's an interruption.
I see many organizations struggling to keep their acceptance-test code bases working. It's common to have many tests failing every day. Even when some tests are fixed during the day, more break the next day. I've seen many teams just give up.
Acceptance Tests Can't Confirm That We'd Accept the Software
There's a simple reason why acceptance tests break more often than unit tests. Just like unit tests, they only verify what we understood when we wrote the tests. And herein lies the problem.
In my last column ("An Uncomfortable Truth about Agile Testing"), I invoked the old definitions of verification and validation, where verification confirms the product was built as specified and validation confirms the product is fit for use or delivers a desired benefit to its user. This last bit, validation, is often subjective. The decision of whether it's delivering benefit isn't something that can be asserted in an automated test; you need to see it and use it.
But that's not all.
Much software--especially commercial software--needs to be easily learnable, efficient to use, and aesthetically pleasing in order to deliver its desired benefit. Even if it's not commercial software--for example, something like your company's handmade, internal, time-tracking system--you'd still like it to have these qualities. But again, these are qualities that can't be verified by automated tests.
The corner I continue to see teams paint themselves into is one in which the team tries to automate acceptance tests before the software has been validated.
Acceptance Tests Written Early Break When You Do the Right Thing
A common pattern I see is acceptance tests--particularly those running the application through the user interface--being written along with the code. They verify that the code was written as specified. Everything's checked in, and then some time later the system is shown to end users or business stakeholders. As you'd expect, they see opportunity for change and for improvement. The change may be to move a couple of fields around the screen, to relocate links or