While anyone who has automated her testing knows you can't create repeatable automated tests from unstable data, it did not dawn on this week's columnist--self-proclaimed automation lobbyist Linda Hayes--that this issue cripples manual testing as well. Read on to share her epiphany.
As an automation lobbyist, I constantly whine about test data--or the lack thereof. It's basically impossible to develop repeatable automated tests without a known, stable data state. For companies that are transitioning from manual to automated testing, realizing this is like stepping into an ice-cold shower: it wakes you up in an unpleasant sort of way.
Don't get me wrong, I know it's a huge problem. You can't just go around archiving and refreshing monster databases, and even if you could, there are related files and interfaces between applications that make it even harder. It didn't dawn on me until recently that the real problem has nothing to do with automation at all.
Here's the scenario: I was working with a company that was evaluating test automation. As part of the assessment, one of their QA managers walked me through a test case manually. The test was to issue a loan against a 401(k) plan. First she had to find a plan that permitted loans, as well as a participant within that plan who had a sufficient cash balance for the loan, had not taken out a loan within the past year, and did not have an outstanding loan from a previous year.
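To make the search concrete, the four conditions she was checking can be sketched as a simple data filter. This is a minimal illustration only: the record layouts, field names, and the `eligible` helper are all assumptions, not anything from the company's actual system.

```python
from datetime import date, timedelta

# Hypothetical plan and participant records, invented for illustration.
plans = [
    {"id": "P1", "permits_loans": True},
    {"id": "P2", "permits_loans": False},
]
participants = [
    {"plan_id": "P1", "name": "A", "cash_balance": 12000.0,
     "last_loan_date": None, "outstanding_loan": False},
    {"plan_id": "P1", "name": "B", "cash_balance": 500.0,
     "last_loan_date": None, "outstanding_loan": False},
    {"plan_id": "P2", "name": "C", "cash_balance": 20000.0,
     "last_loan_date": None, "outstanding_loan": False},
    {"plan_id": "P1", "name": "D", "cash_balance": 15000.0,
     "last_loan_date": date.today() - timedelta(days=90),
     "outstanding_loan": False},
]

def eligible(p, loan_amount, today=None):
    """Apply the four conditions from the manual test case."""
    today = today or date.today()
    plan = next(pl for pl in plans if pl["id"] == p["plan_id"])
    return (
        plan["permits_loans"]                    # plan permits loans
        and p["cash_balance"] >= loan_amount     # sufficient cash balance
        and (p["last_loan_date"] is None         # no loan in the past year
             or (today - p["last_loan_date"]).days > 365)
        and not p["outstanding_loan"]            # no outstanding prior loan
    )

candidates = [p["name"] for p in participants if eligible(p, 10000.0)]
print(candidates)  # only "A" satisfies all four conditions
```

The point of the sketch is that the logic itself is trivial; what took half an hour was that no such queryable, trustworthy view of the data existed.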
This took her about half an hour. Once she found the right plan and participant, it took about ten minutes to issue the loan and confirm it was accepted. Granted, she was explaining things to me as she went along, so without my involvement the whole process would have been faster--but the ratio between locating the data and executing the test case would have been the same.
Next it was time to automate the test case, but as soon as we started she pointed out that we could not use the same participant because it now had a loan outstanding and no longer qualified. So, the whole process had to be repeated.
At this point I concluded that automation was impossible, because we would essentially have to write an artificial intelligence system that knew everything she did in order to find valid accounts. Their environment was not stable enough to reproduce the same data or even to let us add our own, since it was shared by others and updated constantly.
After discussing the implications with her and her manager, we agreed that automation was not an option unless the data environment was brought under control. This would require a substantial investment in terms of hardware, software, and time. I encouraged management to make a business case by pointing out all the benefits that automation would bring. They agreed to run it up the flagpole but made no promises.
It wasn't until some time later, when I was reflecting on this issue for another account, that it suddenly struck me: automation has nothing to do with it!
Think about it: I watched her spend three-quarters of her time for a manual test just dealing with the data and one-quarter running the test. And she was lucky she could even find an account with all of the conditions necessary; no doubt in some cases a test could not be run simply because the data did not exist. This is especially true for test cases that are specific as to time--for example, the posting of interest or dividends, which occurs only on a particular schedule.
So whether she ever automated her testing or not, just providing data stability would improve her manual testing productivity by a factor of four! That's huge.
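The factor of four falls straight out of the times from the walkthrough; a quick check, using the 30- and 10-minute figures observed earlier:

```python
data_setup_min = 30   # time spent locating a qualifying plan and participant
execution_min = 10    # time spent issuing the loan and confirming acceptance
total_min = data_setup_min + execution_min

data_fraction = data_setup_min / total_min  # share of each test spent on data
speedup = total_min / execution_min         # tests possible per unit time, with stable data

print(data_fraction, speedup)  # 0.75 4.0
```

With a known data state, each 40-minute test cycle shrinks to 10 minutes, so the same tester runs four times as many tests--no automation required.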
What I should have been telling management is that they needed to get control