What to Review If You Can’t Review Everything

Summary:
Payson Hall shares with us a useful list of review criteria via a case study of a troubled software development project. Reviews can be messy. Sometimes it’s hard to know where to start, particularly when you are in triage mode and can only review a small sample.

The huge software development project was in trouble: integration testing was discovering significant issues that had somehow escaped unit testing, and confidence in the development vendor was plummeting. Cost and schedule overruns had been staggering. The sponsoring client was wondering what they had to show for the hundreds of millions of dollars invested to date: whether the project was “almost there,” as was being reported, or a black hole into which they could pour additional cash without making a difference.

The root causes of the problems weren’t subtle: The project started two years earlier with a vague notion of requirements and a fixed end date. Project scope was defined broadly, and there was evidence of feature creep from the start. Trying to adhere to the schedule, the team rushed the requirements process. When concerns about requirements were raised, the project management team (inexperienced with projects of this size and complexity) had said, “We will correct any issues in testing.”

During design there was a half-hearted attempt by the vendor to establish traceability between product and requirements, but “there wasn’t time for all that process,” so a spreadsheet was used that few people understood and that wasn’t maintained. The detailed design document was more than 15,000 pages long. The client team had (contractually) thirty days to review it and identify errors, omissions, and items requiring clarification. To facilitate review, the document was divided among a large team of people, each reviewing their own section. The thirty-day mark came and went, then the sixty-day mark ... ninety days ... Individual reviewers met with designers throughout, identifying individual issues and processing them in parallel. There wasn’t time to look for patterns. There wasn’t time to understand the whole of the design. At the six-month mark, the client management team and the vendor declared design “sufficient” because much of the coding (which was happening in parallel) was complete and any further design issues would emerge during testing.

Unit testing was reported to be going smoothly, but integration and acceptance testing (running concurrently because of schedule concerns) hit a wall. The number, severity, and “surprise” of the issues that emerged from “acceptance testing” resulted in gnashing of teeth, rending of hair, and a sudden one-year slip in the project. That year-long delay was probably what prompted the sponsor to request the project management review that got me involved.

Identifying the sources of the problems wasn’t difficult (I imagine you can see them in the preceding paragraphs). What was challenging for me as a reviewer was deciding what to recommend as next steps. The recommendations were due just as the project was scheduled to emerge from its one-year “quality rehabilitation” period. Decisions were needed about whether to continue the project as initially envisioned, reduce scope and salvage the work to date, or euthanize the whole undertaking. Recommendations to kill the project would have been gladly received; sponsors were outraged that the project had gotten so far out of hand, and a blood sacrifice would only begin to appease them.

The project review I was participating in was not technical, so I had no direct visibility into the quality of the technical solution. My concern was that the client might be just a few payments away from owning a Ferrari. If the quality issues had indeed been addressed during the one-year hiatus, it might make sense to continue the project.

I recommended a technical quality audit to inform the go/no-go decision, but the audit had to be done quickly or the impatient sponsors would make the decision without the input (they were getting torches and pitchforks ready for the meeting where the future of the project would be decided). I reached out to people I trust for ideas about what to review: how to triage five million lines of Java code in the space of about two weeks to assess the project’s health, and then inform a decision about whether to kill the huge project (forfeiting the investment to date) or invest hundreds of millions of dollars more rolling the product out.
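As a concrete, hypothetical sketch of what such a triage pass might look like (an illustration only; the class name, signals, and weights below are assumptions, not part of the project described), one could rank Java source files by crude risk signals such as file length and TODO/FIXME density, then spot-review the highest-scoring files first:

import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.stream.*;

// Hypothetical triage sketch: rank Java source files by crude risk
// signals so a small team can sample the riskiest files first.
// The signals and weights are illustrative assumptions.
public class ReviewTriage {

    record Score(Path file, long lines, long todos, double risk) {}

    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : "src");
        try (Stream<Path> paths = Files.walk(root)) {
            paths.filter(p -> p.toString().endsWith(".java"))
                 .map(ReviewTriage::score)
                 .sorted(Comparator.comparingDouble(Score::risk).reversed())
                 .limit(20) // top candidates for a time-boxed spot review
                 .forEach(s -> System.out.printf("%8.1f  %6d lines  %3d TODOs  %s%n",
                         s.risk(), s.lines(), s.todos(), s.file()));
        }
    }

    static Score score(Path file) {
        try {
            List<String> lines = Files.readAllLines(file);
            long todos = lines.stream()
                    .filter(l -> l.contains("TODO") || l.contains("FIXME"))
                    .count();
            // Crude proxy: very long files and unresolved markers tend to
            // correlate with review-worthy code; the weight of 50 is arbitrary.
            double risk = lines.size() + 50.0 * todos;
            return new Score(file, lines.size(), todos, risk);
        } catch (IOException e) {
            return new Score(file, 0, 0, 0); // unreadable file: score as zero
        }
    }
}

Run against a source root (for example, java ReviewTriage src/main/java), it prints the twenty highest-scoring files. A real audit would fold in static-analysis findings and defect data, but even crude sampling beats trying to read five million lines linearly.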

User Comments

Pramod Paranjape

Peer reviews have worked very well for me. Project managers are happy to learn from their peers. Peer pressure works very well in any organization, so project managers are keen to protect their credibility; they correct course based on input from peers. Another advantage is that good news spreads fast: if there are practices that reviewers like, they borrow those ideas and implement them in their own projects.

Review criteria should include the requirements traceability matrix, as it is the most important document. It should be used to cross-check the acceptance criteria for the milestone deliveries.

Overall, this article is thought-provoking. Thanks for sharing!

March 26, 2013 - 2:20am

About the author

Payson Hall

Payson Hall is a consulting project manager for Catalysis Group, Inc. in Sacramento, California. Payson consults on project management issues and teaches project management. Email Payson at payson@catalysisgroup.com. Follow him on Twitter at @paysonhall.
