Merging Waterfall and Agile: Across the Seven Seas

This is the story about how an onsite/offshore team delivered a fixed-bid project using agile practices. The delivery effort was very successful. This article highlights our approach, challenges and successes.

Gingerly Steering Away From Waterfall Toward Agile
In the early 1990s, a consulting practice was created at our company that specialized in delivering large fixed-bid retirement administration systems. This practice delivered systems using a custom waterfall approach in which the requirements, design, development, testing, and rollout phases were executed sequentially. The client spent significant time early in the project and then would finally see the system only after the project had spent over 70% of its resources.

While it varied from project to project, the group generally delivered systems in two releases, the first one approximately 18 months into the project and the second one 12 months later. In our situation, the team was experiencing delivery challenges and had incurred significant variances on the first release. Our leadership (client and consulting firm) was unhappy and losing confidence.

The project delivery executive determined it was time to do something different. He proposed changing our delivery script and moving things earlier in the lifecycle.

Our Practice Executives Weigh In
Once our practice executives found out about our intent to do iterative/incremental development, they hit the panic button. The practice had tried similar approaches before, but had thought of them as laboratory experiments. They had a number of fires burning and little appetite for something different. They hit us with a barrage of questions:

"You have a large offshore component (approx. 60%). How do you plan on managing work iteratively with them?"

"This is a fixed-bid project, what part of it don't you understand"

"Fixed-bid projects need to have signed-off requirements and firm scope, and iterative approaches don't apply, do they?"

"When and how will you baseline scope?"

All great questions! We knew this approach would work, since we had seen similar things work at a dotcom venture, but we recognized this was a lot bigger and more complex (approx. 65,000 hours of effort). We could articulate some things but did not have all the answers. We knew we had to adapt.

Fortunately for us, our project delivery executive led the way and posed the questions back to the group saying, "What other options do we have?" He reflected that something had to change or else the end result would be similar to previous delays and cost overruns. After more discussions it came down to the practice executives staring us down and saying "Okay, in that case you will have to make this work and you are accountable for the outcome." We were not quite sure if this was a vote of confidence or a threat, but we believed in this approach a lot more than waterfall.

Our First Move
One of the most valuable recommendations our delivery executive provided was for us to visit the offshore team. We had two offshore centers, in Chennai and Bangalore. For this one-week orientation/kickoff, we brought both teams together in Chennai and, instead of focusing on technical and functional orientation (which had been the norm until then), we used Lego toys, tennis balls, and tinker toys to play games that focused on creativity and innovation.

The group had a blast! We then discussed the new iterative/incremental approach. The team seemed open-minded. We followed up with detailed sessions with the team leads who were ultimately going to lead them on the new path and be our eyes and ears 8,000 miles away. This was a very different approach than what this group had been used to. They had been used to taking orders, and the thought that they could now help drive delivery felt respectful and empowering (we found out later that other offsite teams started pleading for a similar kind of interaction). All this and a lot more was now going to change.

Experimenting with Agile Practices
We had a lot of constraints. We had a team of 25+ people (onsite plus offsite), and while this was a great team with lots of smart people, the group did not have an iterative/incremental background. We had to play coach and mentor. We knew we had to adapt, and the best thing to do given the situation was to use a hybrid methodology approach. The Agile practices we leveraged included:

Time-boxed iterative development – We broke our delivery into six iterations, each six weeks long. These iterations contained requirements (typically one iteration ahead), design, development, and testing activities. At the end of six weeks, the code was delivered into a user test environment. Our users were now able to play with the system and provide feedback as early as four months into the project. (This typically would not have happened until 10 months into the project.) We had succeeded in moving things earlier in the lifecycle. We would have preferred shorter iterations; this was an adjustment we felt we had to make given the team's lack of experience and the distributed nature of our development. Even if we had wanted to deploy iterations to production, we could not have done so, since the first release of our software (being done by a separate team) wasn't ready to go live.

Lean management, self-contained and empowered teams – The team was split into three functional teams, each with an onsite and an offsite component. The onsite team provided requirements, training, and management leadership, while the offsite team provided design, development, and testing leadership. For the first time, offshore teams were empowered to plan their own assignments; in the past, this had typically been decided by an onsite manager. The three offsite functional teams reviewed the use case catalog and identified components to be designed, built, and tested within the iteration. Components that were not completed at the end of the iteration moved into the next iteration. The offshore team had greater control of their destiny and was able to make the necessary adjustments at the right time. This gave the team a feeling of having 'skin in the game' and increased accountability.

Scrums – Given our distributed delivery environment, we used two streams of Scrum. The first one focused on all onsite team members. We had two onsite centers: Columbus, Ohio and Santa Fe, New Mexico. For many months we held teleconference meetings every business day at 7:00 p.m. EST. Our goal was to over-communicate in the beginning and then tone it down subsequently. The 15-minute teleconference call focused on "What issues are holding you back?", "What have you accomplished today?", and "What are you planning to do tomorrow?" Respecting people's time, we stuck to our 15-minute commitment. The expectation was for team members to hang up after 15 minutes; after all, if we could not manage a 15-minute Scrum on a daily basis, how could we possibly manage six-week iterations? Except for a handful of situations where we sought the team's approval to extend the meeting, the vast majority of our meetings got done in 15 minutes or less. The second Scrum stream focused on interaction between onsite and offsite members of the functional teams. We were ambitious in thinking that these would be daily and structured, but weeks into the project, given the time zone differences, they morphed into two to four meetings a week and became more detailed information exchange sessions. We were successful in applying the Scrum concept to distributed onsite teams, but it took a different shape for the onsite and offsite team interaction.

Detailed velocity measurement – In the spirit of "trust but verify," the project leveraged detailed development velocity metrics to report progress. The metric was updated weekly and reported actual development progress versus planned progress. It used the iteration plans created by the offshore team and reported progress and calculated schedule deviation on a weekly basis. Using a fast-paced iterative/incremental approach put a higher onus on control metrics. The rigor applied in planning and reporting was much deeper and, contrary to conventional wisdom, more rigorous than traditional waterfall development tracking. Not only did we track features and components developed, tested, and delivered, but we were also constantly juggling resources to maximize iteration throughput. This focus on delivering maximum functionality each iteration helped the team stay on top of their game right from the very beginning, versus the old waterfall approach where things often start slow and then pick up as phase deadlines near.
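
To make the weekly tracking concrete, here is a minimal sketch in Java of the kind of planned-versus-actual calculation described above. The class, component names, and numbers are illustrative assumptions, not our actual tooling; on the project this lived in iteration plans and spreadsheets rather than code.

```java
// Minimal sketch of a weekly velocity / schedule-deviation report.
// All names and figures are illustrative only.
import java.util.ArrayList;
import java.util.List;

public class VelocityReport {

    // One planned component within an iteration.
    static class Component {
        final String name;
        final int plannedHours;   // effort planned for this iteration
        final int completedHours; // effort actually completed so far
        Component(String name, int plannedHours, int completedHours) {
            this.name = name;
            this.plannedHours = plannedHours;
            this.completedHours = completedHours;
        }
    }

    public static void main(String[] args) {
        List<Component> iterationPlan = new ArrayList<>();
        iterationPlan.add(new Component("Eligibility calc", 120, 120));
        iterationPlan.add(new Component("Benefit estimate", 200, 150));
        iterationPlan.add(new Component("Statement print", 80, 20));

        int planned = 0;
        int completed = 0;
        for (Component c : iterationPlan) {
            planned += c.plannedHours;
            completed += c.completedHours;
        }

        // Velocity: fraction of the planned iteration work actually completed.
        double velocity = (double) completed / planned;
        // Schedule deviation: hours ahead (+) or behind (-) the iteration plan.
        int deviationHours = completed - planned;

        System.out.printf("Planned: %d h, Completed: %d h%n", planned, completed);
        System.out.printf("Velocity: %.0f%% of plan%n", velocity * 100);
        System.out.printf("Schedule deviation: %d h%n", deviationHours);
    }
}
```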

Continuous integration and automated regression testing – This practice probably provided the biggest bang for our buck. Code developed at multiple locations was checked into a single repository, and a build was generated every night. We used a rookie (a fresh graduate with no business background) to develop automated regression testing scripts. We started slowly and, as iterations progressed, built up a suite of test scripts that validated key functionality to ensure that the build was functional. This practice saved us from putting broken code in front of the client and helped develop trust! We truly started living up to their expectations.
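
For flavor, here is a minimal sketch of what one of those automated regression checks might look like expressed in modern JUnit (assuming JUnit 4 on the classpath). BenefitCalculator, its rule, and the expected values are hypothetical stand-ins; our actual scripts validated key functionality of the retirement administration system against each nightly build.

```java
// Minimal sketch of one automated regression check run against the nightly build.
// BenefitCalculator and its rule are illustrative stand-ins, not the real system.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class NightlyRegressionTest {

    // Hypothetical stand-in for a component delivered in an earlier iteration.
    static class BenefitCalculator {
        // Hypothetical rule: 1.5% of final salary per year of service.
        double annualBenefit(double finalSalary, int yearsOfService) {
            return finalSalary * 0.015 * yearsOfService;
        }
    }

    @Test
    public void benefitForThirtyYearsOfService() {
        BenefitCalculator calc = new BenefitCalculator();
        // Regression check: behavior delivered in earlier iterations must not
        // change when new iteration code is merged into the nightly build.
        assertEquals(27000.0, calc.annualBenefit(60000.0, 30), 0.01);
    }
}
```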

Paired programming – While financially it is a lot easier to implement paired programming offshore than onsite, we did not do this all the time. We cherry-picked assignments that were more complex and had higher impact and assigned paired teams to them. We particularly remember one complex component that we were able to develop with very high quality in one-third of the scheduled time. This had never happened before. Over time this turned out to be a great blessing, as the job market in India was sizzling hot: developers acquired good skills and then switched jobs for a 25% pay raise. Our paired programming approach helped us retain some of that background and experience. If we had to do this over, we probably would have done a higher percentage of paired programming (but still not the whole project that way). Our experience is that not all team members are crazy about this approach; it is a team sport, and 'superstars' and 'MVPs' do not fit well.

Iteration retrospectives – Given the newness of the approach, we periodically sought input during iterations and again at the end of each iteration to collect and apply lessons learned. This helped us identify not only micro-level process and resource adjustment opportunities, but also macro-level dynamics and motivators that then helped shape a new delivery culture. Moving things earlier in the lifecycle forced issues and failures to surface sooner and helped us fine-tune and adapt while we still had time to recover. Our lessons learned ranged from identifying true productivity rates to getting a much better feel for our resource capabilities and understanding our bottlenecks. Perhaps the best thing it did was to give the team a sense of purpose, an opportunity to make a difference, and an incredible team experience that, years later, some of us still have very fond memories of.

Test-driven development – Our development team was required to create unit test plans prior to starting development. This was a new concept for the team members, who would rather have jumped straight into coding. While the offsite development leads oversaw the process, the onsite team had visibility into the repository where all work products were saved. The developers reviewed the use cases and design and created unit test plans that were reviewed by their leads prior to starting development. This was especially beneficial for junior developers who had little business background.
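
As an illustration only, the sketch below expresses the same test-first idea as executable JUnit code (again assuming JUnit 4). On our project the unit test plans were reviewed documents rather than automated tests, and VestingSchedule and its rule are hypothetical.

```java
// Minimal test-first sketch: the checks are written from the use case before
// the implementation is fleshed out. All names and rules are hypothetical.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class VestingScheduleTest {

    // Skeleton created from the design; the developer completes it only after
    // the tests below have been reviewed by the offsite lead.
    static class VestingSchedule {
        int vestedPercent(int yearsOfService) {
            // Hypothetical graded vesting: 20% per year, capped at 100%.
            return Math.min(yearsOfService * 20, 100);
        }
    }

    @Test
    public void fullyVestedAfterFiveYears() {
        assertEquals(100, new VestingSchedule().vestedPercent(5));
    }

    @Test
    public void partiallyVestedAfterTwoYears() {
        assertEquals(40, new VestingSchedule().vestedPercent(2));
    }
}
```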

Summary
While technologies and processes have improved significantly since 2002, the above experiences reflect adaptations of a larger Agile philosophy to a specific project instance. And while some schools of thought may posit that Agile should only be applied in its entirety, we think teams have the potential to realize delivery efficiencies by implementing a subset of Agile tenets that are prudent and relevant. We leave it to you to judge our Agile maturity. The key, we believe, lies in solving business problems using the best possible tools for your individual situation. In our story, our release was a resounding success: we delivered a great-quality solution within aggressive timelines and budgets. Our Agile story may be different from your Agile story, and we think that's fine, or, as Thomas Harris would say, I'm OK - You're OK.


About the Author
Harish Gopalani is a PMP with over 18 years of application development background and holds a degree in Mechanical Engineering. He currently works with Nationwide Insurance and has diverse implementation experience that ranges from developing mainframe, client-server, and J2EE solutions to implementing products such as SAP and CODA. Harish is based in Columbus, Ohio and can be contacted at [email protected]
