is being accomplished is different, although that might be beyond the scope of this article.
Because feedback to the developer is quick, there are few metrics to gather, since defects can be fixed immediately. Metrics such as team velocity are the responsibility of the ScrumMaster or Iteration Manager, which means the QA Analyst should have little to do in the way of metrics collection.
Janet, I see a huge issue here. Testers can only be directly connected to customers if those customers are accessible, and in a number of companies they do not have the access you are assuming. The Scrum teams I have worked with rely on the Product Owner to relay the needs and intent of customers, end users, and purchasers. Also, as you have described above, the person who collaborates with the Product Owner to define acceptance tests then continues to be the one who runs the acceptance tests and does the ad hoc testing. I think this is problematic. The scenario I think of is this: if there is only one person to do the testing after coding is done, who works on the acceptance tests and the stories for the next sprint? That person gets behind and slows down the velocity of the team as a whole. In my experience, sprint planning for one sprint happens during the previous sprint, so the team has its sprint backlog Just In Time (JIT) when the next sprint starts. We seem to have been through a lot of different experiences, and that may be why we see this differently. You see testers "testing up front," and I see the roles of Tester and QA Analyst as separate but equally important.
The Product Owner can do acceptance testing; however, in most cases they are unwilling or unable to do a good job of it due to time constraints. Product Owners rely on business analysts, quality assurance analysts, and testers to represent their interests. I believe you are assuming too much time is available for exploratory testing. On the teams I have worked on, we are only trying to prove to the Product Owner and stakeholders that the software that was promised was delivered to the specified level. If the customer asked for a website with a button, we don't spend time ad hoc testing all the possible issues with that button and the website; we simply prove to the customer that they got what they asked for during requirements gathering. If they want something more, such as an enhancement, we ask for an iteration to get it done and delivered to them. But they have to wait.
We have a terminology issue here. I mean "customer" in a generic way, as in anyone representing the customer; in Scrum, that is the Product Owner. I can accept that quality assurance and testing activities are different, but I do not see them needing to be separate people. If one person does the pre-planning as you described, then carries through the iteration planning and the testing on the same story, she will have a much better understanding of what the issues might be and a clearer picture of the whole. I have seen this work very successfully on many teams, and the testers do not get behind if the team is working together. You are commenting on your experience in your context, so I'll not challenge you on your perceptions. However, here's something for you to think about. Instead of dividing up the activities between