confident this story was done?” This very question implies tests that check for intended functionality, or what my boss calls “Happy Path” testing.
Beck goes on to say that XP teams should have a dedicated tester who “uses the customer-inspired tests as the starting point for variations that are likely to break the software.” This implies that the tester SHOULD guide the customer in defining tests that will really stress the application. He also mentions “stress” and “monkey” tests designed to zero in on unpredictable results.
In practice, when I have neglected to negotiate quality with a customer, acceptance testing became as treacherous as the mud pit which currently surrounds the new wing of my house. I wrote and performed acceptance tests according to my own standard of quality. Naturally, the tests, particularly the load tests and “monkey” tests, uncovered issues. To the XP-naive customer, these just look like bugs, and they’re upsetting. The customer starts to worry that his stories aren’t really being completed.
The XP way to deal with issues or defects of any kind is to turn them into stories, estimate them, and let the customer choose them for subsequent iterations. Defects and unexpected issues will always crop up; to minimize the pain of dealing with them, it's best to set the criteria for quality at the start of each iteration.
Set the Quality Criteria
As the XP tester, ask lots of probing questions during the planning game. If the customer says “I want a security model so that members of different groups have access to different feature sets”, ask: “Do you want error handling? Can the same user be logged in multiple times? How many concurrent logins should the system support?” This may lead to multiple stories, which will make estimation much easier.
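Questions like "how many concurrent logins?" translate directly into acceptance tests once the customer answers them. Here is a minimal sketch, assuming a hypothetical `SessionManager` and a customer-agreed limit of two concurrent logins per user; neither the class nor the limit comes from the article, they are illustration only:

```python
# Hypothetical sketch: an acceptance test for a customer-negotiated
# concurrent-login limit. SessionManager is an assumed stand-in for
# the real application's login machinery.

class SessionManager:
    """Tracks active logins and enforces a per-user concurrency limit."""

    def __init__(self, max_concurrent_logins):
        self.max_concurrent_logins = max_concurrent_logins
        self._active = {}  # user -> count of live sessions

    def login(self, user):
        count = self._active.get(user, 0)
        if count >= self.max_concurrent_logins:
            raise PermissionError(f"{user} already has {count} active sessions")
        self._active[user] = count + 1

    def logout(self, user):
        if self._active.get(user, 0) == 0:
            raise ValueError(f"{user} has no active session")
        self._active[user] -= 1


# Acceptance test: the customer agreed that two concurrent logins are OK.
mgr = SessionManager(max_concurrent_logins=2)
mgr.login("alice")
mgr.login("alice")          # second login is within the agreed limit
try:
    mgr.login("alice")      # a third login must be rejected
    third_rejected = False
except PermissionError:
    third_rejected = True
```

The point is not the implementation but the negotiation: the number `2` in the test is a decision the customer made explicitly, not an assumption the tester made silently.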
Our customers have rarely thought of things like throughput capacity and stability up front—rather, they assume that their intentions are obvious: “Well, of COURSE I want to have more than one user log in at a time”. The tester should turn assumptions into questions and answers. This way you don’t end up with doorless rooms.
Write acceptance tests which prove not only the intended functionality, but the desired level of quality. Discuss issues such as these with the customer:
- What happens if the end user tries a totally bizarre path through the system?
- What are ways someone might try to hack past the security?
- What are the load and performance criteria?
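Once the customer has named a load or performance target, it too can be captured as an executable acceptance check. This is a minimal sketch under assumed numbers (a target of 100 requests completing in under a second); `handle_request` is a placeholder for the real application call, not anything from the article:

```python
# Hypothetical sketch of a load/performance acceptance check.
# The target (100 lookups in under 1 second) is an assumed,
# customer-negotiated number, chosen here only for illustration.
import time

def handle_request(member_id):
    # Placeholder for the real feature-set lookup the story describes.
    return {"member": member_id, "features": ["reports", "export"]}

N_REQUESTS = 100
TARGET_SECONDS = 1.0

start = time.perf_counter()
for i in range(N_REQUESTS):
    handle_request(i)
elapsed = time.perf_counter() - start

within_target = elapsed < TARGET_SECONDS
```

A failing check here is not a "bug" sprung on the customer after the fact; it is a measurement against a standard the customer set during planning.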
As a result of these discussions, you may need to get the team back together to see if stories should be split up or new stories written, and to re-estimate stories to reflect the quality standards the customer has set in the acceptance tests. The customer may have to drop a story or change the mix, but they will be happier with the end result. Higher external quality means more time and/or more cost! Both a VW Beetle and a Hummer will get you to the grocery store, but if you need to cross the Kuwaiti desert, you're going to have to pay for the vehicle that's designed for the job.
Participate in developers’ task assignment and estimation sessions. Testers often have more experience dealing with customers and a better understanding of what the customer meant to request. If the story is for a screen to add a record to the database, it’s likely the customer also wants to be able to read, update, and delete records. Get everyone back together if there have been assumptions or a disconnect in understanding. Testers