Many teams think they are agile in their projects, but if you're not receiving and analyzing feedback regularly, you're not really agile. Plotting the feedback you get on a chart throughout your sprints can help you see whether you have a lag. Read on to learn how to gather and use your feedback to be truly agile.
One of my previous bosses used to say, “A good consultant will always sense weakness in a system in just a few quick interactions. All that’s needed is a view of a few key indicators and an ability to correlate.” I have always been fascinated by the use of metrics to articulate information, and I believe they are an exceedingly powerful tool.
When data is organized in a form that anyone can understand, it can lead to wise decisions. It is a great instrument for showing reality as it is.
In traditional project management, the team monitors all product issues in defect trackers. This practice makes it easy to analyze how the product has been performing. But in many agile teams, team members add defects reported from the field into the product backlog. This hinders quick analysis.
My company had been working in an agile setup for more than five years to develop a financial product that helps corporations deal with their taxes. The team had a certified ScrumMaster, and there was a dedicated product owner to guide the product vision and act as a liaison between users and the development team. All team members had at least eight years of development experience, and there were also a few dedicated testers. On paper, it was a highly capable team.
They were doing regular builds, holding daily stand-ups, and using an application lifecycle management tool for backlog and task planning—all hallmarks of a typical agile software process. I was invited to study them and improve their agility in a project initiative that was split into two releases with six sprints each. The customer used the product once every six months, during the tax returns cycle.
In every sprint, defects were noted by testers and either fixed in the same sprint or spilled over to the next. Teams presented demos to the product owner, but once the product owner and a few others finally got to work with the product after the first release, the defect count rose. I had an hourlong discussion with key team members as a group and plotted the graph below of their defects over time.
As you can see, though the team worked in two-week sprints, real feedback eluded them for as long as ten weeks. The same pattern continued through the next set of sprints and the next release as well. Eventually the team got valuable feedback during user acceptance testing, but that was just before the product was used across the community. The smiley faces show the declining morale of the team.
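The lag described above can be computed directly from per-sprint defect data. The sketch below uses hypothetical numbers (not the team's actual figures) to show one way of measuring how many weeks pass before real-user feedback first appears; the field names and sprint counts are placeholders.

```python
# Hypothetical per-sprint defect log for a team on two-week sprints.
# "internal" = defects found by the team's own testers during the sprint;
# "external" = defects reported by real users (demo attendees, UAT, field).
sprint_defects = [
    {"sprint": 1, "internal": 4, "external": 0},
    {"sprint": 2, "internal": 3, "external": 0},
    {"sprint": 3, "internal": 5, "external": 0},
    {"sprint": 4, "internal": 4, "external": 0},
    {"sprint": 5, "internal": 3, "external": 0},
    {"sprint": 6, "internal": 2, "external": 11},  # spike after the release
]

def feedback_lag_weeks(defects, sprint_length_weeks=2):
    """Weeks of work completed before the first sprint in which
    external (real-user) feedback arrived. Returns None if no
    external feedback was ever recorded."""
    for row in defects:
        if row["external"] > 0:
            return (row["sprint"] - 1) * sprint_length_weeks
    return None

print(feedback_lag_weeks(sprint_defects))  # 10
```

With external feedback first landing in sprint 6, the lag is ten weeks of two-week sprints, which matches the gap the team saw on its own chart. Plotting the internal and external columns side by side per sprint makes the lump of late feedback visually obvious.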
Our discussion revealed a lot of room for improvement.
The main issue was that the team thought it was following the agile methodology, but they weren’t being truly agile. They were deprived of the regular feedback developers need for their work. This was the top cause of lower morale and customer unhappiness. The team ran a workshop to identify how they could improve their definition of ready so they had specifics before starting a sprint.
They also had to revisit the role of testers in the team, whose testing had a significant disconnect from real usage. To fix the feedback discrepancy, testers started employing user personas to represent different sets of habits and needs.
Lastly, we looked at the capability of the product owner role. This engagement had a single product owner who was expected to know all usage scenarios to give feedback in demos, which was unrealistic. We formed a user council who could assist the product owner in filling the gap between expectations and abilities.
When we drew the defect plot, we suspected that the definition of done was not based on an end-to-end working product. Discussions confirmed the assumption: the definition of done was limited to completing work within this product line, but issues emerged when the product line was integrated with interfacing systems during the release cycle. Not all related teams were in cadence. The need for unity across teams and an end-to-end working product across product lines became loud and clear to management.
This also showed that there were significant silos present across the various disciplines. Everyone seemingly did their jobs, but collectively it did not work. This led to mapping testers to specific modules and developers, and making workplace processes more visible to everybody.
In an ideal agile environment driven by feedback, defects would constantly be raised and fixed in the same sprint or in subsequent sprints. If there is a spurt in defects at multiple points, it means real feedback is being missed, and that is what adversely impacts agility the most.
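One crude way to spot those spurts programmatically is to flag any sprint whose defect count far exceeds the average of the sprints before it. This is a sketch under assumptions of my own (the threshold factor and sample counts are illustrative, not from the article):

```python
def defect_spurts(counts, factor=2.0):
    """Return indices of sprints whose defect count exceeds `factor`
    times the average of all earlier sprints — a rough signal that
    feedback arrived late, in a lump, instead of steadily."""
    spurts = []
    for i in range(1, len(counts)):
        avg = sum(counts[:i]) / i  # average of all earlier sprints
        if avg > 0 and counts[i] > factor * avg:
            spurts.append(i)
    return spurts

# Steady counts, then a post-release lump in the last sprint:
print(defect_spurts([4, 3, 5, 4, 3, 14]))  # [5]
```

A steady trickle of defects produces an empty result; a team that only hears from users at release time sees spurts clustered right after each release, which is exactly the lag pattern this team's chart revealed.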
There are two kinds of problems in any system: sporadic and chronic. Agility helps the team respond well to sporadic problems, but chronic problems can become accepted as normal behavior. Addressing the feedback process made this team’s chronic underlying problems visible and brought them together to drive positive change.