"How to Break Software" is a departure from conventional testing in which testers prepare a written test plan and then use it as a script when testing the software. The testing techniques in this book are as flexible as conventional testing is rigid. And flexibility is needed in software projects in which requirements can change, bugs can become features and schedule pressures often force plans to be reassessed. Software testing is not such an exact science that one can determine what to test in advance and then execute the plan and be done with it. Instead of a plan, intelligence, insight, experience and a "nose for where the bugs are hiding" should guide testers.
This book helps testers develop this insight. The techniques presented here not only allow testers to go off-script, they encourage it. Don't blindly follow a document that may be out of date and that was written before the product was even testable. Instead, use your head! Open your eyes! Think a little, test a little, and then think a little more. This book does teach planning, but in an "on-the-fly while you are testing" way. It also encourages automation for the many repetitive and complex tasks that require good tools (one such tool ships with this book on the companion CD). Tools, however, are never a replacement for intelligence: testers do the thinking and use tools to collect data and explore applications more efficiently and effectively.

- A practical tutorial on how to actually do testing, presenting numerous "attacks" you can perform to test your software for bugs.
- A practical approach with little or no theory that shows real ways to effectively test software, accessible to beginners and seasoned testers alike.
- The author is well known and respected as an industry consultant and speaker.
- Uses market-leading, immediately identifiable software applications as examples to show bugs and techniques.
Review By: Karen Johnson, 07/09/2010
This is a tester's book. Pure testing, no theories or academic philosophies. This book is designed for testers whose hands are on the keyboard and who are hungry for testing ideas. The book is geared toward client-server or Web testing in the Windows environment, though some concepts may apply to other environments.
The book is laid out in a series of testing tactics presented as attacks that can be launched against an application. The attacks range from clever (diminished network connectivity versus complete outages) to classic (boundary conditions) to offbeat (recursive functions).
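The "classic" boundary-condition attack mentioned above can be sketched in a few lines. This is only an illustrative sketch, not code from the book: `save_name` is a hypothetical function under test, and the attack simply probes the empty, at-limit, and just-over-limit cases where off-by-one bugs tend to hide.

```python
# Hedged sketch of a boundary-condition attack. `save_name` and its
# 32-character limit are invented for illustration; real applications
# and limits will differ.

def save_name(name: str, max_len: int = 32) -> str:
    """Toy function under test: accepts names up to max_len characters."""
    if len(name) > max_len:
        raise ValueError("name too long")
    return name

def boundary_attack(func, max_len: int) -> dict:
    """Exercise the empty, at-maximum, and maximum-plus-one inputs."""
    probes = [
        ("empty", ""),                      # lower boundary
        ("at_max", "a" * max_len),          # upper boundary
        ("over_max", "a" * (max_len + 1)),  # just past the boundary
    ]
    results = {}
    for label, value in probes:
        try:
            func(value)
            results[label] = "accepted"
        except Exception as exc:
            results[label] = f"rejected: {exc}"
    return results

print(boundary_attack(save_name, 32))
```

A tester would scan the results for surprises, e.g., an input one past the limit being silently accepted or crashing instead of being cleanly rejected.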
The book offers a companion CD with a tool called Canned HEAT. The tool provides a ready means of simulating attack conditions against your own software under test. What's disappointing is that HEAT only works with Windows 2000.
A series of appendices rounds out this short book very well. A couple of appendices are devoted to explaining HEAT and another tool included on the CD. The glossary is lean. Another appendix is an article reprint entitled "What Is Software Testing? And Why Is It so Hard?" It's a good question and an interesting in-depth discussion.
This book is written for the intermediate tester. Novice testers might not appreciate the tactics or understand the depth of the attacks presented. At times, there is a fair amount of assumed knowledge, such as discussions of API calls, with little introduction or clarification. Advanced testers may struggle to glean new tips from the book. But an intermediate hands-on tester will be able to follow along technically and likely find themselves smiling with new ideas to apply immediately at work.
The summary sections are well written and for a testing manager, they may be all that is necessary to read. Review the attack headings to tap into the concepts presented and decide what is applicable.
The attacks are laid out in a consistent pattern of what, when, and how. I would have liked a "why" section added to each attack to give a clear purpose for its end goal; such a section could also offer insight into the risk and likelihood of each attack.
On one hand, using the premise of attacks works well since testers often perform best when a sense of competitiveness is in the air. On the other hand, a less experienced tester may focus on the negative aspect of attacking someone else’s work. The book would do well to remind testers that when they are ready to discuss bugs with a developer, in most companies and with most developers, it's best for testers to keep their bug-hunting glee to themselves and stick to the facts.
The list of attacks makes this a terrific reference book. Even if a handful of the attacks don't apply to the software you are testing at your present job, after a change of jobs or projects the same material may suit your needs perfectly. This will be a testing book to return to.
Review By: Steve Splaine, 07/09/2010
James Whittaker is one of the most engaging speakers I've listened to at software testing conferences, due in part to the rich experience he has gathered in his long tenure in the software development domain. Drawing on this experience, James has distilled the essence of his lively and effective presentations into a concise and easy-to-read book, "How to Break Software."
This book summarizes a series of generic attacks (otherwise known as software testing techniques) that can be applied to virtually any software application. MS Office is frequently used to illustrate these techniques, which might give the false impression that the book is only aimed at GUI applications running on a Windows platform.
Chapters 2 and 3 describe seventeen ways to manipulate input data with a specific test objective in mind (e.g., forcing an internal data structure to store too many or too few values). Chapter 4 lists six attacks that create a hostile environment for the application to run in (e.g., varying file access permissions, filling up free space on a hard drive, or starving an application of CPU time).
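One way to run an environmental attack like those in Chapter 4 without actually filling a disk is fault injection: make the application's writes behave as if the disk were full and observe whether it fails gracefully. The sketch below assumes a hypothetical `save_report` function; only the fault-injection pattern itself, raising `OSError` with `errno.ENOSPC`, is standard.

```python
# Hedged sketch of a "disk full" environmental attack via fault
# injection. `save_report` is an invented function under test; the
# injected write raises ENOSPC ("No space left on device").
import errno

def save_report(text: str, write=None) -> str:
    """Toy application code: writes a report and returns a status."""
    write = write or (lambda data: None)  # a real app would write to disk
    try:
        write(text)
        return "saved"
    except OSError as exc:
        if exc.errno == errno.ENOSPC:
            return "error: disk full, report not saved"
        raise

def disk_full_write(data):
    """Injected fault: every write behaves as if the disk were full."""
    raise OSError(errno.ENOSPC, "No space left on device")

print(save_report("quarterly numbers"))                         # normal run
print(save_report("quarterly numbers", write=disk_full_write))  # under attack
```

The same pattern extends to the book's other environmental attacks: substitute a fault that simulates denied file permissions (`errno.EACCES`) or exhausted memory, and check that the application reports the failure rather than corrupting data or crashing.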
Whether you classify these tests as negative, non-functional, robustness, or something else, they all have the potential to break your software, and thus they are all worthy of consideration. In summary, James has done an excellent job describing a solid collection of techniques for testing software.