Interface or Interfere?


This meant they had to cull out the defects that weren't software issues at all. The process was extremely time-consuming because they ran lengthy test suites, and a single test data problem could cause literally hundreds of failures. Even when the problem was a genuine defect, it might cause multiple failures and therefore create duplicate issues. All this opening and closing of defects tainted their metrics by inflating the defect arrival and close rates, thereby invalidating the classic S-curve report they used to predict release.
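To make the duplication problem concrete, here is a minimal sketch, in Python, of one common mitigation: grouping a run's failures by a failure signature so that one underlying problem yields one candidate defect instead of hundreds. The field names and grouping rule are assumptions for illustration; the article doesn't describe the team's actual tooling.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class TestFailure:
    test_name: str
    error_type: str      # e.g., "DataError", "AssertionError"
    error_message: str   # first line of the failure output
    stack_top: str       # topmost frame of the stack trace

def signature(failure: TestFailure) -> tuple:
    """Collapse failures that likely share a root cause into one key.
    The fields chosen here are an assumption, not a standard rule."""
    return (failure.error_type, failure.stack_top)

def group_failures(failures: list[TestFailure]) -> dict:
    """Group a run's failures so one candidate defect is filed per group,
    rather than one defect per failed test."""
    groups = defaultdict(list)
    for f in failures:
        groups[signature(f)].append(f)
    return groups

# A single bad test-data load knocks out many tests, but collapses
# to one candidate defect rather than inflating the arrival rate.
failures = [
    TestFailure("test_login", "DataError", "missing fixture row", "load_fixtures"),
    TestFailure("test_checkout", "DataError", "missing fixture row", "load_fixtures"),
    TestFailure("test_search", "AssertionError", "expected 3 results", "search_page"),
]
for sig, items in group_failures(failures).items():
    print(f"candidate defect {sig}: {len(items)} failing test(s)")
```

Whether a triage step like this is worth building is exactly the kind of trade-off the team above was weighing; it keeps the defect arrival rate tied to distinct root causes, but it is yet more integration work to maintain.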

Also, by the time they had done their research and reached their conclusions, they had more information and analysis to offer than was available in the test log itself. They still had to pull up the issue in the defect management system and add the additional information, so there was no real time saved.

Finally, he said, in an automated test the script or step that failed was not necessarily the actual root of the problem. Often the issue originated earlier than the point where the failure was logged, so the information available in the test log was not germane. All in all, he decided the integration was more trouble than it was worth.

So I'm on a mission to find out whether these experiences are anomalies or the rule, and whether there is a way to approach the integration that makes it productive. What have you found that works or doesn't?
