Risk-based testing has become an important part of the tester’s strategy in balancing the scope of testing against the time available. Although risk-based methods have always been helpful in prioritizing testing, it is vital to remember that we can be fooled in our risk analysis. Risk, by its very nature, contains a degree of uncertainty. We estimate the probability of a risk, but what is the probability that we are accurate in our estimate? Randall Rice describes twelve ways that risk assessment and risk-based methods may fail.
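The estimation the abstract questions is often a simple exposure score: likelihood times impact. A minimal sketch of that arithmetic (the features and numbers below are invented for illustration, and the abstract's point is precisely that such inputs are themselves uncertain):

```python
# Hypothetical risk-based prioritization: exposure = likelihood x impact.
# Both inputs are estimates, so the ranking is only as good as the guesses.
risks = [
    # (feature, estimated likelihood 0-1, estimated impact 1-10)
    ("payment processing", 0.3, 9),
    ("report export",      0.7, 3),
    ("login",              0.2, 8),
]

ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for feature, likelihood, impact in ranked:
    print(f"{feature}: exposure {likelihood * impact:.1f}")
```

A small shift in either estimate can reorder the list, which is exactly the fragility Rice's twelve failure modes explore.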
STAREAST 2007 - Software Testing Conference
Test automation is the perennial "hot topic" for many test managers. Automation promises much, yet many test automation initiatives fail to deliver on those promises. Shrini Kulkarni explores ten classic reasons why test automation fails. Starting with Number Ten ... having no clear objectives. Often people set off down different, uncoordinated paths; with no objectives, there is no defined direction. At Number Nine ... expecting immediate payback.
A trap is an unidentified problem that limits or obstructs us in some way. We don't intentionally fall into traps, but our behavioral tendencies aim us toward them. For example, have you ever found a great bug and celebrated only to have one of your fellow testers find a bigger bug just one more keystroke away? A tendency to celebrate too soon can make you nearsighted. Have you ever been confused about a behavior you saw during a test and shrugged it off?
You've committed to an agile process that encourages test-driven development. That decision has fostered a concerted effort to actively unit test your code. But you may be wondering about the effectiveness of those tests. Experience shows that while the collective confidence of the development team is increased, defects still manage to rear their ugly heads. Are your tests really covering the code adequately, or are big chunks remaining untested? And are those areas that report coverage really covered with robust tests?
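The gap the abstract describes can be made concrete with a small, hypothetical sketch (the function and tests are invented for illustration): a test can execute every line of a function, so a coverage tool reports those lines as covered, while asserting nothing about the result.

```python
def apply_discount(price, rate):
    """Return price reduced by rate (e.g. 0.1 for a 10% discount)."""
    if rate < 0 or rate > 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

def weak_test():
    # Executes the whole happy path, so a line-coverage tool marks
    # these lines as covered, but nothing about the result is checked.
    apply_discount(100.0, 0.1)

def robust_test():
    # Checks the actual value and exercises the error path too.
    assert abs(apply_discount(100.0, 0.1) - 90.0) < 1e-9
    try:
        apply_discount(100.0, 1.5)
        assert False, "expected ValueError"
    except ValueError:
        pass

weak_test()
robust_test()
```

Both tests "pass" and both raise reported coverage; only the second would catch a wrong discount calculation.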
More than one-third of all testing time is spent verifying test results: determining whether the actual result matches the expected result within some predetermined tolerance. Sometimes actual test results are simple: a value displayed on a screen. Other results are more complex: a database that has been properly updated, a state change within the application, or an electrical signal sent to an external device. Dani Almog suggests a different approach to results verification: separating the design of verification from the design of the tests.
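As a loose illustration of that separation (the verifier and the sample results below are invented for this sketch, not Almog's actual design), verification logic can live in one reusable component that many differently designed tests share:

```python
import math

# Hypothetical verifier designed separately from the tests that use it.
# Tests supply (actual, expected) pairs; the verifier alone decides what
# "matches" means, applying a numeric tolerance where appropriate.
def verify(actual, expected, tolerance=1e-6):
    """Return True if actual matches expected within tolerance."""
    if isinstance(expected, float):
        return math.isclose(actual, expected, abs_tol=tolerance)
    return actual == expected  # exact match for non-numeric results

# The same verifier serves very different kinds of test results:
assert verify(0.1 + 0.2, 0.3)                     # numeric, with tolerance
assert verify("ORDER_SHIPPED", "ORDER_SHIPPED")   # state-change check
assert not verify(99.5, 100.0, tolerance=0.1)     # outside tolerance
```

Centralizing the matching rules means a tolerance change is made once, in the verifier, rather than in every test that compares floating-point results.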
Preventing defects has been our goal for years, but the changing technology landscape (architectures, languages, operating systems, databases, Web standards, software releases, service packs, and patches) makes perfection impossible to reach. The Pareto Principle, which states that for many phenomena 80% of the consequences stem from 20% of the causes, often applies to defects in software.
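The 80/20 observation is easy to check against a defect log. A minimal sketch, with invented per-module defect counts chosen to illustrate the pattern:

```python
from collections import Counter

# Hypothetical defect log: the module each reported defect came from.
defects = (["parser"] * 50 + ["auth"] * 30 + ["ui"] * 5 + ["db"] * 4 +
           ["api"] * 3 + ["config"] * 3 + ["logging"] * 2 +
           ["docs"] * 1 + ["i18n"] * 1 + ["build"] * 1)

counts = Counter(defects).most_common()       # modules by defect count
top = counts[: max(1, len(counts) // 5)]      # the top 20% of modules
share = sum(n for _, n in top) / len(defects)
print(f"top 20% of modules account for {share:.0%} of defects")
```

With these (deliberately Pareto-shaped) numbers, 2 of 10 modules account for 80% of the defects, suggesting where defect-prevention effort pays off most.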
The fundamental promise of Service-Oriented Architecture (SOA) and Web services depends on consistent and reliable interoperability. Despite this promise, existing Web services standards and emerging specifications present an array of challenges for developers and testers alike. Because these standards and specifications often permit multiple acceptable implementation alternatives or usage options, interoperability issues often result.
You've wanted this promotion to QA/Test manager for so long and now, finally, it's yours. But you have a terrible sinking feeling ... "What have I gotten myself into?" "How will I do this?" You have read about Six Sigma and developer-to-tester ratios, but what do they mean to you? Should you use black-box or white-box testing? Is there such a thing as gray-box testing? Your manager is mumbling about offshore outsourcing. Join Brett Masek as he explains what you need to know to become the best possible test manager.