Danny Faught recommends Testing Applications on the Web: Test Planning for Internet-Based Systems by Hung Q. Nguyen. Faught concludes: "This book does not attempt to be a general reference on software testing. What it provides, instead, is domain-specific information that helps the reader plan for testing a Web-based application. Its clear illustrations of important Web testing approaches and its extensive checklists give testers detailed suggestions for their testing, based on real Web development experiences."
Show-stopping failures in Web applications are all too common. One serious but easily avoidable failure is the "dead-end" bug, where a user is left staring at a blank screen without any clue about what went wrong. Derek Sisson describes different types of "dead-end" bugs and shows how to avoid them.
This is a no-holds-barred discussion of common load testing errors and their consequences. Load testing can and should be done long before a system has a stable or complete user interface. One reason people often schedule load testing as the final step in a test or development plan is the common confusion of load testing with functional testing.
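To illustrate why load testing need not wait for a finished user interface, here is a minimal sketch of a load driver that exercises whatever callable you hand it, such as a direct API or protocol-level request, entirely below the UI layer. The request function and its parameters are hypothetical stand-ins, not anything prescribed by the article.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load(request_fn, concurrency=10, total=100):
    """Drive request_fn from multiple threads and collect per-call latencies.

    request_fn is any zero-argument callable that exercises the system
    under test (an HTTP call, a message send, a stored-procedure call).
    No user interface is involved.
    """
    def timed(_):
        start = time.perf_counter()
        try:
            request_fn()
            return time.perf_counter() - start
        except Exception:
            return None  # treat any exception as a failed request

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(timed, range(total)))

    latencies = [t for t in results if t is not None]
    return {
        "ok": len(latencies),
        "failed": total - len(latencies),
        "avg_s": sum(latencies) / len(latencies) if latencies else None,
    }
```

Because the driver knows nothing about screens or widgets, it can run against the earliest server build, which is exactly the point the article makes about scheduling load tests early.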
To be most effective in analyzing and reproducing errors in a Web environment, you need a command of the operating environment. You also need to understand how environment-specific variables may affect your ability to replicate errors. By applying some of the skills covered in this article, your Web testing experience should be less frustrating and more enjoyable.
The Web has enabled pervasive global information sharing, commerce, and communications on a scale thought to be impossible only ten years ago. At the same time, the Web dealt a setback to the user interface experience of networked applications. Only now are Web standards and technologies emerging that can bring us back to the rich and robust user experiences that were developed in the desktop client/server era before the Web came along. Wayne Hom presents examples of great, rich client Web user interfaces and discusses the enabling tools, technologies, and methodologies for today’s popular Web 2.0 approaches. Wayne discusses the not-so-obvious pitfalls of the new technologies and concludes with a look at user interface opportunities beyond the current Web 2.0 state of the art to see what may be possible in the future.
The promises of faster, better, and cheaper testing through automation are rarely realized. Most test automation scripts simply repeat the same test steps every time. Join Ben Simo as he shares his answers to some thought-provoking questions: What if your automated tests were easier to create and maintain? What if your test automation could go where no manual tester had gone before? What if your test automation could actually create new tests? Ben says model-based testing can. With model-based testing, testers describe the behavior of the application under test and let computers generate and execute the tests. Instead of writing test cases, the tester can focus more on the application's behavior. A simple test generator then creates and executes tests based on the application's modeled behavior. When an application changes, the behavioral model is updated rather than manually changing all the test cases impacted by the change.
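The model-based approach described above, where the tester describes behavior and a simple generator derives the tests, can be sketched in a few lines. The login model below is entirely hypothetical; it only shows the shape of the technique: states map to available actions, each action names the expected next state, and a random walk over the model yields executable test sequences.

```python
import random

# Hypothetical behavioral model of a login page. Each state maps the
# actions available in that state to the state the application should
# end up in after the action.
MODEL = {
    "LoggedOut": {
        "enter_valid_credentials": "LoggedIn",
        "enter_bad_credentials": "LoggedOut",
    },
    "LoggedIn": {
        "log_out": "LoggedOut",
        "view_profile": "LoggedIn",
    },
}

def generate_test(model, start="LoggedOut", steps=5, seed=None):
    """Random-walk the model, returning (action, expected_state) pairs.

    Each pair is a test step: perform the action against the real
    application, then verify it reached the expected state.
    """
    rng = random.Random(seed)
    state = start
    test = []
    for _ in range(steps):
        action = rng.choice(sorted(model[state]))  # sorted for determinism under a seed
        state = model[state][action]
        test.append((action, state))
    return test
```

When the application changes, only the model dictionary is edited; every generated sequence automatically reflects the new behavior, which is the maintenance advantage the abstract describes.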
Test automation has come a long way in the last twenty years. During that time many of today's most popular test execution automation tools have come into use, and a variety of implementation methods have been tried and tested. Many successful organizations began their automation effort with a data-driven approach and evolved those efforts into what is now called keyword-driven test automation. Many versions of the keyword-driven test execution concept have been implemented. Some are difficult to distinguish from their data-driven predecessors. So what is keyword-driven test automation? Mark Fewster provides an objective analysis of keyword-driven test automation by examining the various implementations, the advantages and disadvantages of each, and the benefits and pitfalls of this automation concept.
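The core idea behind keyword-driven automation can be sketched briefly: test cases are plain data rows of keywords and arguments, and a small interpreter maps each keyword to an implementing function. The keyword names and the dictionary "application" below are illustrative assumptions, not any specific tool's API.

```python
# Hypothetical keyword library. In a real framework each function
# would drive a browser or API; here a dict stands in for the app.
def open_page(app, url):
    app["page"] = url

def type_text(app, field, text):
    app.setdefault("fields", {})[field] = text

def click(app, button):
    app.setdefault("clicks", []).append(button)

KEYWORDS = {"open_page": open_page, "type_text": type_text, "click": click}

def run_test(rows):
    """Interpret a keyword-driven test: each row is (keyword, *args)."""
    app = {}  # stand-in for the real application driver
    for keyword, *args in rows:
        KEYWORDS[keyword](app, *args)
    return app

# A test case is pure data, writable without programming knowledge.
login_test = [
    ("open_page", "/login"),
    ("type_text", "username", "alice"),
    ("click", "submit"),
]
```

The separation matters: non-programmers author the data rows, while automation engineers maintain the keyword implementations, which is what distinguishes this approach from the data-driven scripts that preceded it.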
Software that performs well is useless if it ultimately fails to meet user needs and requirements. Requirements errors are the number one cause of software project failures, yet many organizations continue to create requirements specifications that are unclear, ambiguous, and incomplete. What's the problem? All too often, requirements quality gets lost in translation between business people who think in words and software architects and engineers who prefer visual models. Joe Marasco discusses practical approaches for testing requirements to verify that they are as complete, accurate, and precise as possible, a process that requires new, collaborative approaches to requirements definition, communication, and validation.