Testing Computer Software, 2nd Edition
Pragmatic and real-world oriented, this book teaches numerous shortcuts and tricks of the trade, including ways to reduce product risk and overall test costs by efficient ordering of test tasks. It also details how to test effectively when the developer gives few or inaccurate specifications, how to work with constantly changing designs and schedules, and how to conduct revealing tests without time-consuming source-code analysis.
Testers will appreciate the advice on effective bug reporting and tracking, black box testing, printer compatibility tests, and software product liability. A unique feature that testers and developers will appreciate is the appendix of more than 400 common software errors.
Review By: Beth Anderson
07/08/2010

This book covers a wide variety of topics related to software testing. The book is broken into three sections: Fundamentals, Specific Testing Skills, and Managing Testing Projects and Groups. The first section is aimed primarily at testers new to the industry. It starts with a simple example and then provides an introduction to testing. The remaining chapters in this section cover the different test types, software error types, and tips on analyzing and reporting bugs. The always-difficult subject of terminology is also addressed.

The second section, Specific Testing Skills, shifts the focus to a more experienced tester and gets into practical “how-to” advice on tools, test planning, and some specific types of testing, including compatibility testing of printers, localization (foreign language) support, and user-manual testing.

The focus shifts again in the third section to a management perspective. Testing is defined for each phase in some commonly accepted software-development-lifecycle models, including testing activities performed by a developer or other participant rather than by the tester. Topics discussed here include quality assurance vs. testing and how and when to implement a test team. Alternative approaches to the test team in an organization are explored, and a chapter discusses legal issues surrounding the test group's responsibility for final software quality. An appendix of common types of errors to look for and an extensive bibliography complete the book.
Overall this is a very good book that would be a nice addition to a software testing professional’s library. I did have some concerns, primarily that the book is dated. While this second edition was published in 1999, some examples and references have not been updated. Most obvious are several examples referencing MS-DOS, and the lack of references to the automated testing tools of today. The only references to automated tools discuss record/playback rather than the modular, data-driven approaches in use now. The bibliography also appears not to have been updated much in the second edition. Several excellent testing books of recent years are excluded.
There are specific details I didn’t agree with, such as the statement that the integrity test “should be conducted by one person, not by a team.” No explanation is given as to why the authors feel this way, and I have seen teams successfully perform their definition of an integrity test. In spite of these concerns, it is a very good book, offering broad coverage of testing topics. Most are not explored in any great detail, but the reader is referred to a related book. The chapters on “Reporting and Analyzing Bugs” and “Problem Tracking” are excellent, providing guidance on how best to document a bug and touching on the politics that go along with reporting bugs in someone else’s work. There is also good, practical advice on test-case design, user-manual testing, and test planning. I would recommend this book, noting that, like most others, you need to determine which parts are of most use to you.
Editor’s note: The reviewer rated this book with three stars. Prior to this review, other sources rated the book for us at five stars. All sources highly recommend the book. We invite your comments.
Review By: Cathy Bell
07/08/2010

You can sit down and read many books that guide you through the processes necessary to produce a quality software product. But most of us work on projects that begin with good intentions, following any one of a myriad of software quality processes; as the deadline gets ever closer, the processes are bent and often discarded in favor of releasing the product on time. How many of us are putting the finishing touches on requirements, test plans, and user manuals after the product’s release?
“The quality of a great product lies in the hands of the individuals designing, programming, testing, and documenting it, each of whom counts. Standards, specifications, committees, and change controls will not assure quality, nor do software houses rely on them to play that role. It is the commitment of the individuals to excellence, their mastery of the tools of their crafts, and their ability to work together that makes the product, not the rules” (vii).
I’m sure we are all collectively nodding our heads in agreement with that statement. This book is all about mastering our craft, not necessarily following the rules.
The authors give us a brief overview of the content of the book, noting the way the book is structured and pointing out how the reader may get the most benefit from the book—as a novice tester, experienced tester, manager, or even as a teacher of this material. Each chapter starts with an explanation of why it was included and what is covered in the chapter. This introductory text also lists other “interesting readings” and refers to other chapters within the book that may further clarify material in the following chapter.
The book is broken down into three sections: Fundamentals, Specific Testing Skills, and Managing Testing Projects and Groups. The basics are covered in chapters 1–5, starting with a simple example: the program takes two numbers as input from the user, displays those numbers on the screen, adds the numbers, and displays their sum. The book shows that there are 39,601 possible number combinations that could be tested, and this does not include entering alpha characters, special characters, or function keys. The novice tester may be overwhelmed by this statistic, but the book shows how to determine which tests to conduct while it explains the testing terms. This general format is followed throughout the book: presenting the reader with a testing problem, then explaining the best solution for the problem while defining relevant terms.
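The arithmetic behind that figure is worth making concrete, and so is the way testers cut such a space down. Here is a minimal sketch in Python (the language and the -99..99 input range are my assumptions, inferred from the 39,601 figure, since 199 × 199 = 39,601; the representative picks are illustrative, not the book's own):

```python
# Size of the exhaustive input space for the "add two numbers" example,
# assuming each input is an integer from -99 through 99.
values = range(-99, 100)            # 199 possible values per field
total = len(values) ** 2
print(total)                        # 39601 ordered input pairs

# Equivalence partitioning plus boundary analysis: pick representatives
# at and around the boundaries instead of testing every pair.
representatives = [-99, -1, 0, 1, 99]   # illustrative picks
test_cases = [(a, b) for a in representatives for b in representatives]
print(len(test_cases))              # 25 cases probe the same partitions

def add(a, b):
    """The program under test, reduced to its visible behavior."""
    return a + b

for a, b in test_cases:
    assert add(a, b) == a + b       # each case checks the displayed sum
```

The point is the ratio, not the specific picks: a handful of well-chosen representatives stands in for tens of thousands of raw combinations.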
But the greatest value in this book is the practical guidance that is the most honest testing insight I have read to date. The advice not only covers the testing processes but also gives insight into the politically correct way to handle issues with developers and management alike.
So are we verifying that the program works or that the program doesn’t work? “You will do your best work if you think of your task as proving the program is no good. You are well advised to adopt a thoroughly destructive attitude toward the program. You should want it to fail, you should expect it to fail, and you should concentrate on finding test cases that show its failures.” Have you ever read that admonishment in another testing book? “The best tester isn’t the one who finds the most bugs or who embarrasses the most programmers. The best tester is the one who gets the most bugs fixed.”
Most programmers are not thinking of what will happen if the user performs functions outside the boundaries of the program’s parameters; that is our job as testers. This issue is not covered in other testing publications because it deals with the human factors: our relationship as testers with the programmers whose code we test. While discussing bug reports and analysis in chapter 5, the authors admonish us that our job is to tell programmers that what they did was wrong, something most people do not take well. But even if our opinion is that the programmer is “sloppy, stupid, or unprofessional,” we have to curb the urge to say so, out loud or in a written report, or we will diminish our own ability to have our reports taken seriously. This advice alone is worth the cost of the book, if you can get a new tester (or any tester) to see its value.
Chapter 7 covers test case design and asks the reader (not just students) to select a commercially available program and write test cases for five data-entry
User Comments
This book is very detailed and very good if you want to learn about testing.
Beth criticizes this book for being out of date, even though it was published in 1999. But actually it was published in 1993. I have a copy of the 2nd Edition with this date, published by Thomson. The publisher changed to Wiley in 1999, but the text was not updated. Clearly it could be updated.
This is the first book I read after entering this profession, and I strongly recommend it to all beginners. The first few chapters, where the authors talk about the fundamentals of testing, are quite interesting. The later chapters introduce the testing process, methods, tools, and management in a clear and concise manner. To this day, I browse through the Reference and Index sections of the book for quick help. This book is a must in the library of every IT concern that cares about quality.
The second edition of Testing Computer Software was published in 1993, not 1999. The 1999 date is a reprint date. I am glad that the book is still helpful, but several of us have been working on an update to the book for years. Much of it has stood the test of time, but other parts are getting elderly. We are continuing to work on an update (in two volumes, one on testing, the other on test management), but in the meantime, you might supplement the book with my commercial course notes, and James Bach's course notes, at www.testingeducation.org. That site is relatively lean today, holding only a few sets of course notes, but we have a fair bit of additional material in the works.

Coming back to TCS 2/e itself, I think the greatest strength of the book is that it is rooted in genuine experience. We wrote about stuff that had worked for us, failed for us, or was complicated for us -- and that we had a broader understanding of based on reading and discussions with other testers. Too many other writers describe things they have never done, and authoritatively prescribe things that will rarely work or that are much more complex than appears in the description.

There are several opportunities for improvement in the book; I'll mention a few: (1) The book talks about test documentation but is far too gentle in its treatment of the documentation "standards" like IEEE 829. I had serious reservations about 829 back in 1983, when I started writing the first edition of TCS, and by 1993, I thought it was clear that this standard was doing more harm than good in commercial markets. But we didn't feel confident that we had a good way to explain our reservations. Pettichord, Bach, and I tackled this much better in Lessons Learned in Software Testing. (2) The discussion of bug reporting was good as far as it went, but my course notes extend the idea of bug advocacy quite a bit. (3) The chapter on test design is too narrow.
We mentioned some other testing strategies, especially exploratory testing, but we spent most of our time on domain testing (boundary analysis, equivalence analysis, etc.), even illustrating it with a full chapter applying it to printer compatibility testing. We had used several other approaches, but didn't know how to articulate them well enough for our descriptions to be useful to other testers. We were also concerned that we would not appropriately credit other senior folks for their work--a lot of people were using these methods, and we were learning from them, but not much of their work was published. Today, I think that a good discussion of black box testing should cover the following techniques in detail: domain testing, function testing, scenario testing, specification-based testing, risk-based testing, exploratory testing, user testing, regression testing (manual and automated), state-based testing, multi-variable combination testing, and high-volume random testing. (4) The book doesn't address metrics well, largely because we didn't have good constructive advice. You might find my draft chapter (for TCS 3) helpful, Measuring the Extent of Testing, at http://www.kaner.com/pdfs/pnsqc00.pdf. (5) The bug appendix is getting old. Two of my students, Giri Vijayaraghavan and Ajay Jha, have been developing a next-generation categorization of bugs and examples. Giri's work won a Best Paper award at Quality Week this year and will go up on Testingeducation.org soon. (His paper will be replaced by his M.Sc. thesis in a few months.) Ajay's thesis will take a bit longer.

I mentioned that several of us (Bach, Nguyen, McGee, Jorgenson, Falk, me) are working on the third edition of TCS. I can't tell you when it will be ready. We'll release it when we've got something that we're proud of, and that covers the key issues that we think are important today.
It's been a very challenging task, probably the most difficult project that I've worked on, and it will take some more time before it's done.
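Domain testing, which the comment above describes as the technique the book emphasizes, is easy to sketch. Here is a minimal illustration in Python (the function name and the 1..99 field range are hypothetical, chosen only to show the classic boundary picks, and are not taken from the book):

```python
def boundary_values(lo, hi):
    """Classic boundary-value picks for an integer field valid in [lo, hi]:
    just outside, on, and just inside each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# For a field documented to accept 1 through 99, six cases probe
# both edges of the valid domain.
cases = boundary_values(1, 99)
print(cases)    # [0, 1, 2, 98, 99, 100]
```

Each returned value would be fed to the field under test, with the out-of-range picks expected to be rejected.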