This is where it all begins:
Requirements must be stable for reliable results. However, the requirements always change.
We don’t want requirements to change. However, because “requirements change now” is a known risk, we try to provoke requirements change as early as possible.
–Niels Malotaux “Evolutionary Project Management Methods”
It’s been almost ten years since a group of seventeen met in Snowbird, UT to “invent” agile—their name for a loosely related group of methodologies variously termed “light,” “rapid,” and “extreme.” In point of fact, more than twenty years earlier, developers—hardware developers at that—were doing “agile” projects in Japan, and they even called their practices “Scrum.” Some say that key agile management ideas reach back yet another generation to the US Space Race to the moon.
The connection between hardware and space development methods and current agile methods—which are largely centered in software development—is risk. The original motivation for most agile practices was to reduce risk—the risk that the product being developed might not work as advertised, the risk that the product might not do the job the customer needed doing, and the risk that over centralization of project control might dampen innovation.
Points of Focus
Notice from the discussion thus far that cost, schedule, and scope—the traditional points of focus for most managers and analysts—are not the main points of risk management and control in an agile methodology. Agile projects focus more keenly on those things that produce value for the customer, and business analysts focus on the specification and measurement of the value delivered and the value impact on the business.
In the Beginning, There Is a Concept
In the beginning, there is a concept, and the concept may not be fully formed or internally consistent. Thus begins enterprise analysis of what the enterprise intends to gain from the project. In the beginning, the concept represents only a top-down value allocation to what will be the likely detailed requirements. It takes some clever elicitation to draw out the end-state and begin to assemble actionable requirements. But, there is a hazard to which business analysts must be alert: anchor bias.
Anchor bias arises when questions are asked for which there may be an initial position, assumption, experience, or just a setup in the question. The initial conditions are called the anchor, whether it is held by the interviewer or the person answering. The problem is that even knowledgeable people begin the thought process of answering by making mental adjustments to the anchoring position. Too often, these adjustments are self-limited, thereby foreshortening the range of possibilities envisioned in the scope of the concept.
Consider this example faced by many business analysts: Validate a requirement priority: A stakeholder presents a need for a particular project feature; the business analyst asks for the most pessimistic and optimistic market reaction to this feature, because there are other features competing for the resources. The stakeholder “anchors” his thinking at a sales figure he has already proposed: 100,000 units will be sold because of this feature. However, to be responsive, the stakeholder responds with an “adjustment” for pessimism and for optimism and gives a range of 80,000 to 150,000 units—a range he considers safely “inside the box” around the 100,000 estimate.
Another analyst approaches the validation assignment differently. In an interview with the same stakeholder, the analyst offers an anchor, saying: “Information from our benchmark database shows similar features have had little sales impact in the past. Please justify your 100,000 estimate.”
“Little sales impact” is outside the box, beyond what the stakeholder would have thought reasonable. Won’t everyone like this feature? Faced with “facts,” the stakeholder abandons the 100,000 anchor and anchors his thinking at “little impact,” trying to work out an adjustment from this new benchmark. The stakeholder will either try to work upward from nearly zero to make the case for the 100,000-unit impact or will lower the estimate to get closer to the historical record.
Did the first analyst go into the interview and simply accept the stakeholder’s anchor? Was the second analyst gaming the situation? Was “little impact” just a test of the strength of the anchor at 100,000 units, or was it a valid benchmark, suggesting something was missed when formulating the 100,000-unit estimate?
Would a smart analyst ask both questions in the same interview in order to triangulate the real priority?
Anchor bias is a form of “the power of suggestion.” Even though anchored with our own beliefs, a suggestion may well shake our beliefs and establish an entirely different point of reference for adjustment. Because of the anchoring effect on requirements analysis—and similar effects in budget estimates—anchor bias affects concept development, cost and schedule estimating, and risk estimation. Anchor bias is present throughout the project lifecycle. Anchor bias is what inhibits “thinking outside the box.” Sensitivity to the phenomenon on the part of the business analyst is required to mitigate its effects.
Freeze and Bundle; Rank and File
Agile really kicks in when requirement analysis, assessment, and management get underway. Agilists think in terms of a requirements backlog that is ever changing—thus the term “dynamic backlog.” Although the overarching concepts may well be fixed and understood, the detailed scope—as represented by the dynamic list of requirements—is unlikely to be stable over the course of the project.
The solution to the stability dilemma is to freeze requirements during development. To make this practical, development must be parsed into discrete periods called “increments,” “sprints,” or “delivery cycles.” Parsing requires ranking requirements, insofar as they are known, according to sequence logic, importance, urgency, and practicality. Once ranked, requirements are filed in folders for each increment of development.
The governance rule is: During an incremental development cycle, the backlog cannot and does not change. The corollary is: Between increments, change is provoked. The backlog changes according to the assessment and evaluation of analysts who keep an eye toward value for the customer. Some backlog requirements are deleted as obsolete or not needed; new ones, not thought of before, are added. The whole deck is reprioritized and rebundled.
Requirement bundles and development increments don’t work very well without benchmarks to establish the likely performance of developer teams. Business analysts assist in gathering and evaluating benchmarks, the most important of which is to establish a throughput metric to be used in “throughput accounting.”
But, benchmarks also serve as a tool for establishing two categories of requirements within each increment, as illustrated in figure 1:
Must-be-completed requirements: This bundle is made no more sizeable or complex than what the benchmark forecasts as “doable” in the duration of the delivery cycle.
Complete-if-possible requirements: These fill out the white space in the bundle. Any requirements not completed go back into the dynamic backlog to be rebundled and reprioritized.
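The two-category bundling process described above can be sketched in code. The following is a minimal, illustrative sketch—the class, field names, and greedy fill strategy are assumptions for illustration, not a prescribed implementation: requirements are sorted by rank, the must-be-completed bundle is filled up to the benchmarked capacity of one delivery cycle, and everything else becomes complete-if-possible.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    rank: int   # lower number = higher priority after ranking
    size: int   # estimated effort (e.g., story points); names are illustrative

def bundle(backlog, benchmark_capacity):
    """Partition a ranked backlog for one increment.

    benchmark_capacity: effort the team's benchmark forecasts as
    "doable" in one delivery cycle. Requirements that fit within it
    are must-be-completed; the rest are complete-if-possible and
    return to the dynamic backlog if not finished.
    """
    must, if_possible, used = [], [], 0
    for req in sorted(backlog, key=lambda r: r.rank):
        if used + req.size <= benchmark_capacity:
            must.append(req)
            used += req.size
        else:
            if_possible.append(req)  # fills the "white space" in the bundle
    return must, if_possible
```

Because the must-be bundle is capped by the benchmark, the process gives the assurance of minimum completion the text describes; the complete-if-possible set carries no such commitment.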
The essential matter is that the benchmarking and requirement-categorization process establishes an assurance of minimum completion—minimum throughput—and this process forms the core of the ongoing business analysis tasks of enterprise analysis and solution assessment.
Throughput, as a business metric, first came to prominence with the popular acceptance of the Theory of Constraints (TOC), postulated by Eliyahu Goldratt in the 1980s, a generation before “agile” was coined. In TOC, the essential matter is to optimize at the enterprise level—rather than the departmental level—the production of products and services that customers and users, either internal or external, want to buy and use. That is our definition of project throughput. In the agile world, throughput is produced in each delivery cycle.
Returning to the wisdom of Niels Malotaux: “If the customer is not satisfied, he may not want to pay for our efforts. If he is not successful, he cannot pay. If he is not more successful than he already was, why should he [pay]?”
“More successful than he already was” is the source of our throughput metric. Note this distinction: Throughput produces “output,” the application of which—to the business or by the customer—produces “outcomes,” like efficiencies, revenues, cost savings, or other scorecard results. Output is products or services. Outcome is the business effect of output.
Although project managers must, of necessity, be concerned for output and its cost and schedule, business analysts are most concerned with measuring throughput effects as improvements in the business. Agile business analysts focus on the difference between the value of the business before the project was executed and the value of the business after the project was executed. After all: “If he is not more successful than he already was, why should he [pay]?”
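Malotaux’s test—more successful than he already was—reduces to a before-and-after delta. A minimal sketch, with illustrative function names and numbers that are not from the article: each delivery cycle’s output is credited with an outcome value, and the analyst measures the total difference over the pre-project baseline.

```python
def throughput_effect(value_before, cycle_outcomes):
    """Business value added by the project, per the before/after test.

    value_before: scorecard value of the business before the project.
    cycle_outcomes: outcome value (e.g., revenue gained or cost avoided)
                    attributed to each delivery cycle's output.
    Returns the before/after difference the agile analyst measures.
    """
    value_after = value_before + sum(cycle_outcomes)
    return value_after - value_before

# Three delivery cycles, each producing measurable outcome value:
# throughput_effect(1_000_000, [40_000, 25_000, 60_000]) -> 125_000
```

The point of the sketch is the distinction in the text: the cycles deliver output, but only the outcome deltas enter the throughput metric.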
In figure 2, we see projects A, B, and C; the value of project B is built on top of cost expenditures. The overall cost of resources that a business chooses to dedicate to all projects is assumed fixed in the short term; any one project consumes some part of these fixed costs. Thus, value is shown as an “add” on top of a fixed cost base.
Agile is a response to requirements instability. In that sense, it is a risk-management strategy. The focus of agile is value delivered to the customer or user. The tactics within agile to deal with requirement instability are to freeze requirements incrementally, bundle them for short durations of development, and then reevaluate bundles after each delivery cycle. In the end, it’s all about throughput. Agilists measure success by the improvement in the lot of the customer and in the difference in business value before and after project execution.