"Warning: The fairy tale you are about to read is a fib--but it’s short, and the moral is true. Once upon a product cycle, there were four testers who set out on a quest to test software." Read this article for the whole Intelligent Test Automation story.
Once upon a product cycle, there were four testers who set out on a quest to test software.
Tester 1 started hands-on testing immediately, and found some nice bugs. The development team happily fixed these bugs, and gave Tester 1 a fresh version of the software to test. More testing, more bugs, more fixes.
Tester 1 felt productive, and was happy--at least for a while.
After several rounds of this find-and-fix cycle, he became bored and bleary-eyed from running virtually the same tests over and over again by hand. When Tester 1 finally ran out of enthusiasm--and then out of patience--the software was declared "ready to ship."
Customers found it too buggy and bought the competitor's product.
Tester 2 started testing by hand, but soon decided it made more sense to create test scripts that would perform the keystrokes automatically. After carefully figuring out tests that would exercise useful parts of the software, Tester 2 recorded the actions in scripts. These scripts soon numbered in the hundreds. At the push of a button, the scripts would spring to life and run the software through its paces.
Tester 2 felt clever, and was happy--at least for a while.
The scripts required a lot of maintenance when the software changed. He spent weeks arguing with developers, urging them to stop changing the software because each change broke the automated tests. Eventually, the scripts required so much maintenance that there was little time left to do testing.
When the software was released, customers found lots of bugs that the scripts didn't cover. They stopped buying the product and decided to wait for version 2.0.
Tester 3 didn't want to maintain hundreds of automated test scripts. She wrote a test program that went around randomly clicking and pushing buttons in the application. This "random" test program was hypnotic to watch, and it found a lot of crashing bugs.
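The flavor of Tester 3's approach can be sketched in a few lines. Everything here is invented for illustration: `buggy_calculator` stands in for the application under test, and `monkey_test` is the random tester. Note that it can only notice crashes (exceptions), not wrong answers--which is exactly the limitation Tester 3 runs into.

```python
import random

def buggy_calculator(op, a, b):
    """Toy application under test; it crashes on one unguarded input."""
    if op == "add":
        return a + b
    if op == "div":
        return a / b  # crashes with ZeroDivisionError when b == 0
    raise ValueError("unknown operation: " + op)

def monkey_test(iterations=1000, seed=42):
    """Fire random inputs at the application and record any crashes.

    A fixed seed makes the random run reproducible, so a crash
    found overnight can be replayed the next morning.
    """
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        op = rng.choice(["add", "div"])
        a, b = rng.randint(-5, 5), rng.randint(-5, 5)
        try:
            buggy_calculator(op, a, b)
        except Exception as exc:
            crashes.append((op, a, b, type(exc).__name__))
    return crashes
```

A run of `monkey_test()` turns up the division-by-zero crashes quickly, but it would sail right past a calculator that quietly returned `a - b` for "add"--the result is never checked against an expectation.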
Tester 3 enjoyed uncovering such dramatic defects, and was happy-at least for a while.
Since the random test program could only find bugs that crashed the application, Tester 3 still had to do a lot of hands-on testing, getting bored and bleary-eyed in the process. Customers found so many functional bugs in the software when it was released that they lost trust in the company and stopped buying its software.
Tester 4 began with hands-on, exploratory testing to become familiar with the application--and used the knowledge gained during the hands-on testing to create a very simple behavioral model of the application. Tester 4 then used a test program to test the application's behavior against what the model predicted. The behavioral model was much simpler than the application under test, so it was easy to create. Since the test program knew what the application was supposed to do, it could detect when the application was doing the wrong thing.
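Tester 4's idea--generate random action sequences and check each observed result against a simple model--might look like the sketch below. The names are hypothetical: `FlakyStack` is a made-up application under test with a planted bug, and the "model" is just an ordinary Python list acting as the oracle.

```python
import random

class FlakyStack:
    """Hypothetical application under test, with a planted bug in pop()."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        if len(self._items) == 2:   # the bug: removes the wrong element
            return self._items.pop(0)
        return self._items.pop()
    def size(self):
        return len(self._items)

def model_based_test(steps=200, seed=7):
    """Drive the application with random actions; after each action,
    compare its observable behavior against the model's prediction."""
    rng = random.Random(seed)
    app, model = FlakyStack(), []
    for step in range(steps):
        if model and rng.random() < 0.5:
            got, expected = app.pop(), model.pop()
        else:
            x = rng.randint(0, 9)
            app.push(x)
            model.append(x)
            got = expected = None
        if got != expected or app.size() != len(model):
            return f"bug found at step {step}: got {got}, expected {expected}"
    return "no divergence found"
```

Because the model is so much simpler than the real application, keeping it up to date when developers change a feature is a small edit--which is why Tester 4's tests kept running while Tester 2's scripts rotted.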
As the product cycle progressed, developers wrote new features for the application. Tester 4 quickly updated the model, and the tests continued running. The program ran day and night, constantly generating new test sequences. Tester 4 was able to run the tests on a dozen machines at once and get several days of testing done in a single night.
After several rounds of testing and bug fixes, Tester 4's test generator began to find fewer bugs. Tester 4 upgraded the model to test for additional behaviors and continued testing. Tester 4 also did some hands-on testing and static automation for those parts of the application that were not yet worth modeling.
When Tester 4's software was released, there were very few bugs to be found. The customers were happy. The stockholders were happy, too.