Getting quality time from SMEs to define requirements and test cases is especially difficult when you move from manual to automated testing. Even when documented—which is unfortunately not the rule—manually executed tests can get away with statements like "Enter valid data and verify that results are correct." What makes the data valid? What makes the response correct? What response is valid for which data? The answer for a manual test is that the person performing the test decides, which is fine if they are application experts. For someone trying to convert a manual test into an automated one, this knowledge is not native. Trying to explain why you need explicit data values can be an exercise in frustration.
No doubt, much of this exasperation is due to the over-enthusiastic cost-cutting fever that swept the industry when the Internet bubble burst, which caused the offshore wave. The problem is that, while you can find programmers in other countries who know coding languages or you can apply brute force to testing with higher offshore headcounts, you can't find foreign experts in US insurance, healthcare, brokerage, and other arcane, regulated industries. Furthermore, this type of knowledge is not easily acquired or transferred. The best SMEs and business analysts gained their expertise through years of hands-on, front-line experience. As a result, SMEs are in high demand and short supply.
I have struggled to explain to management why this is so important, but now I have evidence to support my argument: a research report from Quantitative Software Management Inc. (See www.qsma.com for more information.)
This study was based on a sample of 563 IT software development projects completed between January 1, 2000 and December 31, 2004, measuring actual schedule and effort expended from business requirements definition through the initial delivery and stabilization phase. The typical project release comprised 30,000 new or modified source lines of code (about 600 function points), took place over thirteen and one-half months, and consumed fifty-five person-months of effort. The study found four key factors accounting for the differences between best and worst in class:
- Effective project leadership resulting in low staff turnover.

The results are not surprising. But what I like about the report is that instead of the usual admonition about making sure the team members have the right technical skills, this study found an explicit need for functional knowledge of the application domain. In other words, what the application is supposed to do for the end users, which translates to a need for SMEs.
Another thing I like about this study is that it follows projects beyond the initial delivery all the way through the stabilization phase. How many times has a project been delivered "on time" because requirements were left out or poorly understood or implemented, and then, shortly after the first release, the missing requirements were uncovered and corrected in a flurry of fixes?
In fact, I suspect that domain knowledge assumes more prominence in these findings than it has in others for exactly that reason. It is not until the end users have spent some time with the application that requirements deficiencies or subtle but critical functional gaps are uncovered.
So what can you do? First, make all the noise you possibly can about getting SMEs involved throughout the process. Wave this study around. And don't fall for the argument that because the schedule demands you be in stage X or Y by now, you should move forward even without input, review, or sign-off from the SMEs who are so busy elsewhere. You'll pay for it later, with interest. Finally, be sure to monitor the so-called stabilization phase and focus on clearly identifying the issues that result from inadequate SME involvement, so you can make your case better the next time around.