Tell me if you've heard this before: "Oh, we're agile. We do standups and have a product backlog." I've heard this exact statement from an engineering manager and dozens of similar statements from others, all claiming agility and tossing out a couple of buzzwords.
In this example, the reality was that their standups were an hour long, the team had more than twenty people, and the backlog was just a filter on enhancement requests in their bug tracking system. Few agilists would argue that what this team was doing could be considered agile, even if it may have been an improvement over what they were doing before.
So you know the organization isn't really agile. That's the easy part. The challenge comes when you talk to such an organization. Even if you're a highly regarded consultant, you'll be hard-pressed to change their minds based on your opinion alone.
Enter the agile assessment. This tool lets you evaluate how teams, or even entire organizations, are progressing in their agile journey.
And like anything useful, there is no shortage of assessment options available. How do you choose the right one to use for your organization?
An Agile Assessment Tool versus Model
The first thing to understand is the difference between a tool and a model or strategy. A Google search will uncover dozens of agile, Scrum, or lean assessment tools. While many of these tools are excellent, they aren’t useful unless you know what you should be assessing. If you hand me a hammer and don't give me plans for building something, then all I can do is pound in nails all day with no real results.
When I first used agile assessments, all I had was a poorly documented tool. Without a strategy for how to use it, I fumbled about a lot and was pretty much ineffective with it—not unlike collecting metrics and not doing anything with them.
So while tools get all the fanfare, it's the end-to-end assessment model that is critical. You may end up using more than one tool in your model, and focusing on the overall approach keeps you from getting locked into a single-tool mindset.
Acceptance Criteria for a Good Agile Assessment
Let's start with what success looks like. If we think of an assessment like a research study or even a survey, we can draw on four well-understood criteria to measure against.
- Retest reliability: You test a team in January and test them again in March. You know this team has not made any real changes. Does the assessment show the same results? If it does, that's good retest reliability.
- Inter-rater reliability: You go in and review a team and give them an A+. Your colleague goes in and reviews the same team and gives them a D–. This would be poor inter-rater reliability. Using the same assessment for the same team, you should not get two such wildly different results based on the person administering the assessment.
- Internal validity: It’s clear to you from just looking at the team that they got remarkably better. Does the assessment reflect this? That's internal validity—knowing that the program (the agile transformation, in this case) is having the effect you desired.
- External validity: Is your test so specific that it will only work with this company or department and would be totally inapplicable anywhere else? That would be poor external validity. If your assessment can apply to multiple places and applications, then it is likely a much better tool than something completely customized.
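Inter-rater reliability, in particular, can be checked numerically. One common chance-corrected agreement measure is Cohen's kappa; the sketch below applies it to two assessors grading the same teams. The letter grades and team data are purely illustrative.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance.

    1.0 means perfect agreement; 0.0 means no better than chance.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of teams where both raters matched.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Chance agreement: probability both raters pick the same grade at random.
    expected = sum(counts_a[g] * counts_b[g] for g in counts_a) / (n * n)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

# Two assessors grade the same ten teams on an A-F scale (made-up data).
you       = ["A", "B", "B", "C", "A", "D", "B", "C", "C", "A"]
colleague = ["A", "B", "C", "C", "A", "D", "B", "C", "B", "A"]
print(round(cohens_kappa(you, colleague), 2))  # roughly 0.72: decent agreement
```

A kappa near 1.0 suggests the assessment produces consistent results regardless of who administers it; a kappa near zero is the A+ versus D– scenario above.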
What to Measure with the Agile Assessment
Once you have the tools to ensure you are building a good model, you need to think about what you are going to measure. This is the hardest place to offer guidance, because what you need to measure can be very different depending on the organization, type of agile program, where the teams are in their journey, and so on.
Generally, though, there are some key areas you want to make sure you are looking at. Here is what I personally recommend.
- Ability to define the product: One of the larger threats to agile success is a lack of clarity in the backlog. If a team can’t work together to define what gets built, they will never get to how to build it.
- Ability to plan effectively: Now that you know what you want to build, do you know how you will go about it? Do you communicate well in the scrum events? Are your plans clear? Are your information radiators visible?
- Ability to deliver a working, tested product: At its core, this point is about the ability to get to “done.” As teams evolve it becomes more about technical practices like pairing, test-driven development, and continuous integration.
- Ability to continually improve: If a team is not able to learn and take actions from their past sprints, they will never truly improve.
Choosing an Assessment Strategy Model
There is no silver bullet. Just keep reminding yourself of this, and you'll be so much more successful at any agile implementation that you do.
In the case of an agile assessment, there isn't a single tool or system that will get you to success. Instead, it is a series of interlocking events and tools that will give you the ability to best assess an organization or team. Just as you need a strategy for approaching an agile transformation, you also need a strategy for how you will approach an agile assessment.
The following steps make up a framework I've found successful.
1. Conduct observation interviews: By asking the same questions to all your stakeholders, you create a view of how the organization is currently operating and where the clear areas of improvement are. Doing these observation interviews has helped me to better tailor the next stages of the assessment model to fit with the organizations I am helping.
2. Get a quick read on agility: I started doing a quick test last year to gauge teams’ agility, and it made a huge difference in the effectiveness of my assessment strategy. While having a facilitator ask the questions is ideal, the quick test can be done by teams on their own.
Come up with roughly ten questions that touch on the basic practices for agile, such as how big the team is, whether team members are collocated, and how often they release. Answers should be either yes/no or hard data (e.g., How many people are on your team?). The output will be a score that allows a basic assessment of the agility of the team or organization.
By starting with the quick test, you can quickly prioritize teams to work with in more detail. A team that scored well will probably be farther down the list for a facilitated assessment.
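To make the quick test concrete, here is a minimal sketch of a scorer. The questions, thresholds, and team data below are illustrative only, not a standard instrument; the point is that each answer reduces to a pass/fail point and the total gives a rough read.

```python
# Each entry pairs a question with a pass/fail check over the answers dict.
# Questions and thresholds are examples, not a vetted questionnaire.
QUICK_TEST = [
    ("Is the team nine people or fewer?",          lambda a: a["team_size"] <= 9),
    ("Are team members collocated?",               lambda a: a["collocated"]),
    ("Do you release at least monthly?",           lambda a: a["releases_per_month"] >= 1),
    ("Do you hold a retrospective every sprint?",  lambda a: a["retro_every_sprint"]),
    ("Is the backlog ordered by a single owner?",  lambda a: a["single_backlog_owner"]),
]

def quick_score(answers):
    """Return (points scored, points possible) -- a rough read, not a verdict."""
    passed = sum(1 for _, check in QUICK_TEST if check(answers))
    return passed, len(QUICK_TEST)

# The twenty-plus-person team from the opening story, roughly translated.
team = {
    "team_size": 22,
    "collocated": True,
    "releases_per_month": 0,
    "retro_every_sprint": False,
    "single_backlog_owner": False,
}
print(quick_score(team))  # a low score flags this team for a facilitated assessment
```

A low score moves a team up the priority list for the facilitated assessment in the next step; a high score moves it down.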
3. Assess the team: The majority of assessment tools are some kind of facilitated questionnaire for individuals or the team. It’s important to remember three key points here.
First, take some time to observe the team in action. You can be effective only if you first create a relationship of trust. If you just walk in on day one and start asking questions, you’re likely to get the same accuracy as throwing darts to pick your lottery numbers. Observing the team first allows you to come up with better questions, get better answers, and temper those answers with your firsthand knowledge.
Next, ensure every voice is heard. While teams could use many of these tools by themselves, having a facilitator is vital to ensuring the best results. It's no different from any other potentially stressful meeting or event. A good ScrumMaster knows that if the team is really struggling, it is good to bring in an outsider to facilitate. Don't just hand over a tool to a team and expect good results. Large risk factors here are the organizational system and the team’s own emotional blinders. Organizations and people both are biased toward positive reviews, and without a facilitator, your end results can show a false view of the state of agile.
The last point is to make sure your tool follows the acceptance criteria for a good agile assessment tool.
4. Share the results: After completing the assessment comes the tricky part: sharing it with the team or organization. If an organization is early in its agile journey, the results will likely be bleak. You have to decide just how much to share by using the classic retrospective style of focusing on future improvement. Instead of showing a detailed assessment filled with low scores, share a summary report with suggestions for improvements that will reduce delivery time while improving quality and team happiness.
5. Keep observing: It’s important to once again conduct an observation, this time mapping it to the assessment questionnaire and observing how the team performs firsthand. This is the bridge between passive and active involvement. You've assessed the team, you’ve briefed them, and you've identified areas of improvement. You now get to see the team in action for an extended period of time. And unlike during your earlier observations, the team now has a road map and near-term backlog of things to work on.
6. Rinse and repeat: Incorporate that agile standby, the inspect and adapt loop. Every three to six months you should check on the team’s progress and aim for continuous improvement.
Choosing an Assessment Tool
There are dozens of assessment tools available. Some are free, and some cost money. Some don't come even close to meeting the acceptance criteria for a good test, and some have Ph.D. theses backing up their data.
Rather than skew opinions with my own personal preferences, I want to refer you to Ben Linders's excellent compilation of tools. While his own self-assessment is on the list, Ben has done a great job listing, without bias, all the tools he's run across. My last client used his list as a starting point for deciding what tools we wanted to incorporate into our assessment model.
Let Your Data Be Your Guide
When a sprint is over, you know if it was successful. You can look at the backlog and see what is done and what's not done. The team can look at their velocity to determine the next sprint’s capacity, and there are several reliable ways to forecast with the sprint data.
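One of those reliable forecasting approaches is Monte Carlo simulation over historical velocities. The sketch below draws sprint velocities at random from past sprints and reports a conservative 85th-percentile answer to "how many sprints until this backlog is done?" The velocity history and backlog size are made-up numbers for illustration.

```python
import random

def forecast_sprints(velocities, backlog_points, trials=10_000, seed=42):
    """Monte Carlo forecast: sprints needed to burn down the backlog.

    Each trial resamples sprint velocities (with replacement) from history
    until the remaining points reach zero, then the 85th-percentile sprint
    count is returned as a conservative "likely done by" answer.
    """
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    results = []
    for _ in range(trials):
        remaining, sprints = backlog_points, 0
        while remaining > 0:
            remaining -= rng.choice(velocities)
            sprints += 1
        results.append(sprints)
    results.sort()
    return results[int(trials * 0.85)]

# Illustrative history: six past sprints and a 120-point backlog.
history = [18, 22, 15, 25, 20, 17]
print(forecast_sprints(history, 120))
```

The same resampling idea scales up to throughput-based forecasts when a team doesn't estimate in points; the key input is simply honest historical data.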
An agile assessment model gives you the same level of awareness for your agile transformation. You just have to make sure you're working from good data.
Thank you, Joel, for your kind words about my list of self-assessment checklists and tools. Much appreciated!
I like that the author advocates for four general outcome-oriented categories instead of a list of practices. Practice lists are deeply flawed for assessment because practices are very dependent on the situation, some "non-Agile" practices sometimes make sense for the situation, and practices evolve over time; practices that were once seen as "Agile" are now in question (e.g., team rooms).
The one nitpick I would make is that the assessment item "Ability to deliver a working, tested product" needs to include something about speed. After all, Agile is about agility, which implies speed: a glacier is not very agile. Indeed, the importance of automation is often overlooked, yet as Ron Jeffries has said, you cannot really have an Agile team that does not use automated testing. Agile is more than just a set of behavioral paradigms: it is also about speed and flexibility.