Accelerating the Adoption of Technical Practices

Summary:

Agile teams are supposed to take responsibility for how they work and how they learn. But what if you need to jump-start that learning? Agile transformation is about making this happen rather than waiting for it to happen. You need to get your team to learn the technical side of agile, and soon. Here are some effective approaches.


An approach that we have seen work really well is creating training teams. These teams provide intensive, hands-on training in a classroom setting, as well as one-on-one coaching to reinforce the training. In 2012, one of us (Scott) created such a team to train a very large IT organization in agile technical practices, and this article recounts that experience.

The directive was to infuse knowledge of technical practices throughout all of the in-house software development teams across the organization’s many locations around the country. The practices to be taught included test-driven development (TDD), test automation, automated quality analysis, and continuous integration (CI). Rather than go through a lengthy narrative on each training strategy, we will summarize the approaches here and then discuss them:

  1. The training team consisted of two members (including Scott), each with strong development experience and hands-on familiarity with many agile technical practices and tools.
  2. We created a standard set of easily installed test tool stacks, starting with stacks that supported the most common development platforms in that environment (Java and Oracle); a sketch of one such stack appears after this list.
  3. We trained entire teams at a time, in person in a classroom, with some lecture and discussion but mostly hands-on practice.
  4. The course ran three weeks, with two-hour sessions three times a week, to minimize the impact on project work.
  5. We followed up with the teams in person after training them, using a coaching approach.
  6. We measured code quality, test coverage, and practice usage after the teams had been trained and reported their progress to management in a scorecard-style report.
  7. We kept track of which teams had been trained and reported that in the scorecard. This put pressure on group managers to get their teams trained.
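
To make item 2 concrete, here is a minimal sketch of what a packaged Java test tool stack can look like. The specific tools and version numbers below (Maven with JUnit 5 and the JaCoCo coverage plugin) are illustrative assumptions, not necessarily the exact stack we used:

    <!-- Illustrative pom.xml fragment: unit-test framework plus coverage -->
    <dependencies>
      <dependency>
        <groupId>org.junit.jupiter</groupId>
        <artifactId>junit-jupiter</artifactId>
        <version>5.10.2</version>
        <scope>test</scope>
      </dependency>
    </dependencies>
    <build>
      <plugins>
        <plugin>
          <groupId>org.jacoco</groupId>
          <artifactId>jacoco-maven-plugin</artifactId>
          <version>0.8.11</version>
          <executions>
            <!-- Attach the coverage agent to the test run -->
            <execution>
              <goals><goal>prepare-agent</goal></goals>
            </execution>
            <!-- Generate the coverage report during verify -->
            <execution>
              <id>report</id>
              <phase>verify</phase>
              <goals><goal>report</goal></goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>

A packaged stack along these lines can be dropped into a project and produce consistent test and coverage output from day one, which is what makes cross-team comparison possible later.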

As a result, adoption of agile technical practices advanced rapidly across the organization. Most teams were supportive of the new direction and willing to give it a try. In one case, a team (one of the more experienced) rejected the use of TDD; as other teams succeeded with the practice, peer pressure eventually brought that team around to using it. To be fair, TDD has a very long learning curve, and it is normal for some people to resist, because it is a substantial change in how they think and work.
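
The shift in thinking is easiest to see in miniature. As a hedged illustration (the article does not show any of the teams' code; the class and test below are invented for this purpose), TDD means writing the failing test first and only then the production code that satisfies it:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Step 1 (red): the test is written first and fails, because
    // PriceCalculator does not implement the behavior yet.
    class PriceCalculatorTest {
        @Test
        void appliesTenPercentDiscountFromOneHundred() {
            PriceCalculator calc = new PriceCalculator();
            assertEquals(90.0, calc.discountedPrice(100.0), 0.001);
        }
    }

    // Step 2 (green): write just enough code to make the test pass.
    class PriceCalculator {
        double discountedPrice(double amount) {
            return amount >= 100.0 ? amount * 0.9 : amount;
        }
    }
    // Step 3 (refactor): improve the design while the test stays green.

Working in this order inverts the code-first habit of an entire career, which is part of why the learning curve is long.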

In another case, a team that was handed a large, newly released application to support quickly recognized, because of the training, that the code it had inherited was fragile and difficult to test. The team reached out to us and received in-depth coaching on how to deal with the problematic code. Within a month, the team was self-sustaining and already seeing improvements in code quality.

Creating the initial training materials was a substantial task. It took two experts about four months to assemble an appropriate test tool stack and create the three-week course. We then started running teams through the course and followed up afterward by traveling to each team's location, staying for several months, and sitting with the team to see whether it needed help putting classroom learning into practice on its actual project. Our goal was to cement what the teams had learned in a real work setting, so our approach was both experiential and contextual.

How This All Works Together

Training an entire team together was very effective, both because the outcome could be assessed in terms of the team's work and because no one on a team was left out while others moved ahead. The entire team was offline together, which ultimately made the group more productive as a whole and turned the training into a team learning experience: everyone shared a common training experience and therefore common reference points. Management reporting later focused on team performance with the new skills rather than on individual performance.

Having a standard set of tool stacks was essential, both for training purposes and for sharing techniques across the organization. The intention was not to standardize in a way that prevents the use of alternative tools, but to create a common, evolving baseline that fosters communication and cross-training.

Many organizations with remote locations want to use web-based training for agile methods. There is no reason web-based training cannot be used, and web-based training does exist for some of the tools. However, we wanted an intense, immersive experience to ensure that developers were highly focused on the training in a group setting. What needs to be learned is both broad and deep, and immersion is necessary. Had the training not been delivered in a classroom, we anticipated that people would not be allowed to devote the sustained, uninterrupted time that a rapid ramp-up requires. The whole point was to bite the bullet, make the investment, and then reap the benefits as quickly as possible.

Assessment is always part of a learning program. Any teacher knows this; you need to test students. The highly innovative Khan Academy system is built around continuous individual assessment through an automated dashboard and progression to the next step when assessment criteria have been met, allowing each student to progress at his or her own rate.

We realized that we needed to build assessment into our training process, both to verify that the training was working and to measure its effect, which we felt was true to the empiricism of agile. Thus, after a team was trained, it was strongly encouraged not only to record and display its code quality and test coverage, but also to demonstrate its progress to the business on a regular basis. This was done by integrating each team's test coverage tool, code quality tool, and automated build output into an aggregated dashboard, which we then shared with management in a biweekly presentation.
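
The article does not describe the dashboard's internals, but the aggregation step can be as simple as extracting one number per team from each tool's report. As a hedged sketch, assuming JaCoCo's XML report format (counter elements with missed and covered attributes as direct children of the report element), a small utility might pull the report-level line coverage for display:

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;

    // Hypothetical dashboard feed: prints report-level line coverage
    // from a JaCoCo XML report (default path is the Maven convention).
    public class CoverageSummary {
        public static void main(String[] args) throws Exception {
            File report = new File(args.length > 0 ? args[0]
                    : "target/site/jacoco/jacoco.xml");
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            // The report declares a DTD; do not try to fetch it.
            factory.setFeature(
                "http://apache.org/xml/features/nonvalidating/load-external-dtd",
                false);
            Document doc = factory.newDocumentBuilder().parse(report);
            // Report-level counters are direct children of <report>.
            NodeList children = doc.getDocumentElement().getChildNodes();
            for (int i = 0; i < children.getLength(); i++) {
                Node node = children.item(i);
                if (node.getNodeType() != Node.ELEMENT_NODE) continue;
                Element counter = (Element) node;
                if (!"counter".equals(counter.getTagName())) continue;
                if (!"LINE".equals(counter.getAttribute("type"))) continue;
                double covered = Double.parseDouble(counter.getAttribute("covered"));
                double missed = Double.parseDouble(counter.getAttribute("missed"));
                System.out.printf("Line coverage: %.1f%%%n",
                        100.0 * covered / (covered + missed));
            }
        }
    }

One line of output like this per team per build is enough to populate a scorecard-style dashboard without any manual collection.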

Reporting test coverage was mandatory and strongly supported by group management. Management could see the test coverage of the teams that had been trained, as well as the growing number of trained teams over time. This helped reassure management that the very substantial investment of each team's time was worthwhile. It also provided an important metric for agile adoption, because technical practices are a key part of agile. In addition to the coverage reports, agile coaches working with each team provided a subjective assessment of the team's progress, generally focused on impediments that management could address or should be aware of.
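
One common way to make coverage reporting non-optional is to wire a minimum threshold into the build itself, so that a shortfall fails the build visibly. The article does not say whether the organization enforced a numeric floor; as an assumption for illustration, JaCoCo's Maven check goal can express such a rule (the 70 percent figure below is invented):

    <!-- Illustrative coverage gate: the build fails below the floor -->
    <plugin>
      <groupId>org.jacoco</groupId>
      <artifactId>jacoco-maven-plugin</artifactId>
      <version>0.8.11</version>
      <executions>
        <execution>
          <id>check</id>
          <goals><goal>check</goal></goals>
          <configuration>
            <rules>
              <rule>
                <element>BUNDLE</element>
                <limits>
                  <limit>
                    <counter>LINE</counter>
                    <value>COVEREDRATIO</value>
                    <minimum>0.70</minimum>
                  </limit>
                </limits>
              </rule>
            </rules>
          </configuration>
        </execution>
      </executions>
    </plugin>

With a gate like this in place, no one has to chase teams for numbers: every build either proves the floor was met or fails in plain sight.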

We also took the same build, metric, and dashboard system and applied it to software developed by external solution providers, which enabled us to measure the quality of their code. Using this approach, we were able to convince contract management to add quality clauses to contracts. Overall, code quality metrics helped in both supplier and contract management.

Our Space Is “Too Complex”

The same techniques have also been applied successfully in the embedded software and systems space. While this environment is significantly more intricate and has higher overall system complexity, the same delivery mechanism, teaching and coaching approach, and condensed, intensive training have been just as effective. Cost savings are even more significant in this space, especially when testing moves away from hardware-in-the-loop rigs toward more automated software-in-the-loop testing, reducing capital expenditures.
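
The move from hardware-in-the-loop to software-in-the-loop testing hinges on one design seam: control logic depends on an interface rather than on the device driver, so tests can substitute a software double for the lab rig. The sketch below is hypothetical, and is in Java for consistency with the earlier examples (an embedded team would more likely write it in C or C++):

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;
    import org.junit.jupiter.api.Test;

    // The hardware abstraction seam: production code binds this to the
    // real sensor driver; software-in-the-loop tests bind it to a double.
    interface TemperatureSensor {
        double readCelsius();
    }

    // Control logic under test; it never touches hardware directly.
    class FanController {
        private final TemperatureSensor sensor;
        FanController(TemperatureSensor sensor) { this.sensor = sensor; }
        boolean fanShouldRun() { return sensor.readCelsius() > 40.0; }
    }

    // Scripted doubles replace the hardware rig entirely.
    class FanControllerTest {
        @Test
        void runsFanWhenHot() {
            assertTrue(new FanController(() -> 55.0).fanShouldRun());
        }

        @Test
        void restsFanWhenCool() {
            assertFalse(new FanController(() -> 25.0).fanShouldRun());
        }
    }

Every test that crosses this seam runs on a developer workstation or build server instead of on a lab rig, which is where the capital-expenditure savings come from.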

So far, we have not found a software space that did not benefit from this approach.

Conclusion

The general approach used here was highly effective, and it can be replicated outside of the domain of testing and continuous integration. For example, we believe that it can be applied to enterprise architecture, governance, release management and deployment, security, and generally any specialized area that is part of the solution delivery pipeline. The legacy approach of having discrete steps that are performed by different groups in sequence then gives way to an approach in which the development team performs every function, but with the support of specialists through training, coaching, automated testing, and process oversight. The role of specialized groups changes from doing to teaching, from being a gatekeeper to being a guide, and from inserting speed bumps to helping teams to create a safer path.
