The #NoEstimates hashtag is, effectively, a container. Within that container, many different conversations are happening about estimation practices, the estimates those practices produce, and the business behaviors those estimates drive.
A common reply from the skeptics is, “#NoEstimates is a solution looking for a problem. If teams want to improve they shouldn’t dump estimation, they should work on making better estimates!!!”
False assumptions about what #NoEstimates means aside, improving estimation is an interesting idea. But what does that really mean?
The Problem of Uncertainty
Let’s start with what makes estimation difficult: uncertainty. We live in a world of uncertainty, and software projects provide a great microcosm of the types of uncertainty we face in our lives.
This uncertainty can manifest itself in two ways: accidental complication and essential complication.
Accidental complication includes the things we can improve. Craftsmanship, code quality, development processes, and management practices are all examples of the types of accidental complication we can reduce and manage.
Essential complication is inherent to the kind of work we are doing. The difficulty level of a problem you are trying to solve is an example of essential complication.
J.B. Rainsberger introduced these terms at Øredev 2013. He invited the audience to consider the idea that accidental complication often dominates the cost of a feature, thereby making estimating the cost of a feature an exercise in measuring the dysfunction in your software practices and codebase.
This means our estimation is often not driven by the difficulty of the problem we are trying to solve, but rather by the health of our codebase, the quality of process, and how much discipline we have in our management practices.
It reminds me of the old joke about a junior developer asking a senior developer how to make sure their estimates are “right.” The senior developer replies, “Easy! Take your estimate, multiply by two, and add two weeks.”
This is why project management offices have created Excel spreadsheets that add percentages (padding) to estimates based on past outcomes—to try to come close to accounting for the accidental complication within their organization. There are times when fixing systemic issues is cost-prohibitive, and this approach can work.
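To make that spreadsheet math concrete, here is a minimal Python sketch of padding a new estimate from past outcomes. All numbers are invented for illustration; a real PMO would feed in its own project history.

```python
# Derive a padding factor from past outcomes and apply it to a new estimate.
# The numbers are hypothetical; a real PMO spreadsheet would use its own history.

past_estimates = [10, 8, 20, 5]   # original estimates, in days
past_actuals   = [14, 12, 31, 7]  # what the work actually took, in days

# Average overrun ratio across past work items.
padding_factor = sum(a / e for a, e in zip(past_actuals, past_estimates)) / len(past_estimates)

raw_estimate = 12  # days, a new unpadded estimate
padded_estimate = raw_estimate * padding_factor

print(f"padding factor: {padding_factor:.2f}")      # ~1.46 for this history
print(f"padded estimate: {padded_estimate:.1f} days")
```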
However, if you want to improve your estimates and you agree that accidental complication dominates the cost of a feature, then agile and #NoEstimates thinking can have the biggest impact on your team’s success.
Reducing Accidental Complication
Accidental complication can exist in many parts of a project, but here are three areas where many teams can start to reduce uncertainty and improve the quality of their estimates: craftsmanship, software development processes, and management practices.
Craftsmanship
Craftsmanship is inherent to agile practices. It is called out explicitly in one of the principles behind the Agile Manifesto: "Continuous attention to technical excellence and good design enhances agility."
A lack of craftsmanship can—and often will—slow down your ability to ship valuable software that delights your customers.
How much technical debt exists in your codebase? Do you refactor your code regularly? Is test-driven development in your toolbox? How about complications from legacy code? Do you have dependencies on other teams and software? How are these dependencies managed? Can you deploy features frequently?
Many teams can benefit by adopting test-driven development and reserving time to refactor messy areas of their codebase. If you’re unsure where to begin, this could be a great experiment to try first.
This is also a prerequisite to adopting #NoEstimates practices. A clean codebase makes rapid delivery and feedback possible. It also allows for simpler story slicing. Without these basics, #NoEstimates is risky.
Software Development Processes
Scrum is deceptively simple. Yes, the Scrum Guide is only seventeen pages long and can be taught during a two-day course. But the skills needed to play the game of Scrum well are far from simple, and uncertainty can creep into the Scrum team’s efforts.
Relative estimation (planning poker) is highly susceptible to volatility when accidental complication is high. Is your work too big? Teams that are working on epics instead of stories often discover that their initial assumptions about the work were incorrect and need revising—and so does the estimate.
The same types of problems can happen when the stories are unclear. Without enough detail about what the customer wants and how it will be measured (acceptance criteria), rework and revisions can happen.
Is everything priority number one? If so, work in process can get too large, which means everything is 90 percent complete but never finished. Lack of prioritization also leaves the risky stories scattered in the backlog. This can leave uncertainty and risk lurking far too deep in a project.
Partnering with your product owner is essential. At the core of every project is a well-refined and prioritized backlog. At the core of that backlog are stories. Spending time on what a good story means within your organization can help resolve many of the accidental complications listed above.
Management Practices
Does your organization know the difference between a target, a commitment, and an estimate? Does your management team know why they are asking for an estimate? Have you helped them answer that question? (It’s a partnership, right?) Can you say no to the estimation negotiation game?
Asking these questions can help uncover areas of accidental complication within your management practices.
Changes in technology, assumptions, and team members also impact your ability to estimate well. And multitasking is a silent killer of productivity and predictability. The context switching that occurs when people change tasks eats away at precious time and invalidates any estimate given about the tasks being juggled.
All of these events can act as a trigger to re-estimate the impacted work, so team stability is critical.
Reconsider Your Estimates
Uncertainty is what makes estimation difficult. Organizations have many options to consider when trying to account for the accidental and essential complications that uncertainty causes.
For teams looking to improve their estimates—and, in effect, reduce their accidental complication—the best advice I can give is to adopt test-driven development, level up your product owner skills, and partner with your management to help them learn the impacts of bad estimation practices.
And talk with your organizational leaders about whether playing the estimation game is necessary. Questioning the need for and use of estimates is at the core of the #NoEstimates movement.
User Comments
Up until the very last paragraph, there's little to disagree with in this post. But one does marvel at how any of it can be claimed as "#NoEstimates thinking". It's all fairly obvious, and none of it is new. When a team has issues with its ability to make useful estimates, the kinds of root causes for that (e.g., code base cleanliness, management practices, team stability, and so on) that are identified in the post are standard suspects and areas for scrutiny/improvement. Read any standard book on (gasp) software estimating, such as McConnell (published 2006).
Claiming that scrutiny of those basic, well-known root causes is somehow "#NoEstimates thinking?" Please. With inclusion criteria like those, I would expect truisms like "don't run with scissors" and "brush three times daily" to be mentioned next.
Let's turn to the last paragraph now, which is the kind of illogical leap we see again and again from #NoEstimates advocates: "talk with your organizational leaders about whether playing the estimation game is necessary. Questioning the need for and use of estimates is at the core of the #NoEstimates movement". Yet nothing in the article up to that point really talked about questioning the need for or use of estimates; rather, it earnestly explained some basic, obvious precepts about how estimates can go awry and how they can be improved. This disconnect makes about as much sense as if I explained at length how to eat nutritionally, and then mentioned casually, in a final paragraph, "the core of my movement is questioning the need to eat at all."
Ryan, as Peter said, not much to disagree with here.
But you have "redefined" the two sources of uncertainty. They are defined, in the mathematics of decision making in the presence of uncertainty, as follows:
Epistemic uncertainty comes from a lack of knowledge. (Epistemology is the study of knowledge.) This uncertainty can be reduced by gaining knowledge, and agile is a good way to do that, with its incremental and iterative production of working software to test the validity of the ideas. Epistemic uncertainty follows a probabilistic process: the probability that the code won't work, the probability that we've hired staff who don't understand the problem.
Aleatory uncertainty comes from the naturally occurring statistical processes of the project. These uncertainties cannot be reduced. The only way to deal with aleatory uncertainty is with margin: schedule margin, cost margin, technical margin.
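As a rough illustration of handling irreducible variation with margin, here is a minimal Monte Carlo sketch in Python. The task list and the triangular distributions are assumptions made up for illustration, not taken from any particular program:

```python
import random

# Aleatory variation: each task's duration varies naturally around its mode.
# Triangular distributions and these numbers are assumptions for illustration.
tasks = [(3, 5, 10), (2, 4, 9), (5, 8, 15)]  # (min, most likely, max) in days

random.seed(1)
totals = sorted(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
    for _ in range(10_000)
)

p50 = totals[len(totals) // 2]
p80 = totals[int(len(totals) * 0.80)]

print(f"median schedule: {p50:.1f} days")
print(f"80% confident:   {p80:.1f} days")
print(f"schedule margin: {p80 - p50:.1f} days")  # margin covers the irreducible spread
```

The variation never goes away; the margin between the median and a high-confidence percentile is what protects the commitment.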
Here's a briefing deck used in our Software Intensive System of Systems domain on how to deal with both reducible and irreducible uncertainty: https://www.slideshare.net/galleman/managing-in-the-presence-of-uncertainty
This is the standard mathematics of managing in the presence of uncertainty, mandated by contract regulations, and it is applicable to all project work, no matter the domain, once you remove the reporting-format requirements.
In the decision making processes in the presence of uncertainty, estimates are needed. They are not optional.
I would suggest a read of https://www.pearltrees.com/s/file/preview/140338315/DecisionAnalysisfort... as well; it will show why:
No decision in the presence of uncertainty (Epistemic or Aleatory) can be made without estimating the impact of that decision.
This is not opinion, this is not personal experience talking, this is a mathematical fact of the theory of decision making. Ignore this theory and the principles on which it is based at your own risk.
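As a tiny worked example of that claim (all probabilities and payoffs invented), consider a build-versus-buy decision: the choice is nothing more than a comparison of estimated outcomes.

```python
# Build vs. buy, decided on estimated outcomes. All numbers are hypothetical.
# (probability of success, value if it succeeds, cost)
build = (0.6, 500_000, 200_000)
buy = (0.9, 300_000, 150_000)

def expected_value(option):
    p_success, value, cost = option
    return p_success * value - cost

print(f"build EV: {expected_value(build):,.0f}")  # 0.6 * 500k - 200k = 100,000
print(f"buy EV:   {expected_value(buy):,.0f}")    # 0.9 * 300k - 150k = 120,000
# Change the probability estimates and the decision can flip;
# without estimates there is nothing to compare.
```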
So the conjecture about whether estimating is necessary starts, of course, with the Value at Risk. For low-risk work, or low-value work at higher risk, estimates can be, and many times are, skipped: "We're willing to just try it and see what happens."
Without an assessment of the Value at Risk, that statement has no basis in fact for those paying your salary.
This is the fundamental misunderstanding in #NoEstimates. It's not your money. If it is your money, do with it as you wish.
Other than the last paragraph and the misdefined sources of uncertainty the bulk of the post is useful for those just coming to the estimating experience.
For those interested in the principles, processes, and practices of estimating in the presence of uncertainty on agile projects Chapter 5 of this resource might provide some help https://www.slideshare.net/galleman/agileevm-bibliography-v2
To be pedantic (part of all our jobs, isn't it?) ...
Uncertainty doesn't make estimation difficult. It simply reduces our confidence in the prediction we make. The task is no harder; we just have to add the caveat that we have less/little confidence in our prediction. If we knew with some certainty the *level* of uncertainty, we could be confident in our prediction of X hours plus or minus Y. If there were no uncertainty, we'd simply trot out a prediction from a proven, trusted formula (and it wouldn't be an estimate any more; it would be a certainty).
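To make "X hours plus or minus Y" concrete, here is a small sketch (with invented sample data) that turns the spread in past actual-to-estimate ratios into a confidence band around a new prediction:

```python
import statistics

# Ratios of actual time to estimated time on past work. Invented sample data.
ratios = [0.9, 1.1, 1.3, 1.0, 1.6, 1.2, 1.4, 0.8]

mean = statistics.mean(ratios)
spread = statistics.stdev(ratios)

estimate = 40  # hours, the raw prediction X
# Report X plus or minus Y, where Y reflects how uncertain we have been before.
low = estimate * (mean - spread)
high = estimate * (mean + spread)

print(f"prediction: {estimate * mean:.0f} hours, "
      f"between {low:.0f} and {high:.0f} hours")
```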
The problem with estimation in software is we only get one chance to be proved right or wrong (i.e. within our predicted tolerance). We're wrong a lot of the time. I think stakeholders understand this problem more than we do. We just feel too guilty.
The real challenge with estimates is that we make *promises* underpinned by estimates that we have little confidence in. So we feel that we are almost lying. If delivery takes longer than our estimate, we feel responsible. Problem is, stakeholders take our estimate and ignore the caveat. Perhaps we should only ever offer an estimate of X + Y. Then, if we fail, some guilt is due.
If we offer a project manager or product owner an estimate that is 'high' (whatever that means), we might be perceived to be incompetent, or cautious, or incapable of delivery. All bad. So we start gaming the estimate. We pitch high, too high, and are beaten down to a more realistic/acceptable number; the PM thinks they've 'got a good deal', and we might be a tad more confident in delivery (to a timescale we kind of trust. Sort of).
But it's dishonest. And self-defeating - your estimates will never be trusted by the PM again.
If we truly understood what we are doing, and estimated 'well', our estimates would be lower and higher than the real outcome in equal measure. But it's almost unheard of that we deliver uncertain things early. There's a chronic level of self-deception going on here.
"Things take as long as they take". It's not predictable, but we can do a better job of prediction with uncertainty, I'm sure. We just need to understand what causes the uncertainty better.
I can't agree with your summary statement ending:
If you want to improve your estimates, then agile and #NoEstimates thinking can have the biggest impact on your team’s success.
Perhaps you'd better define 'improve' above? Is that reducing the numbers, or being closer to reality? I've no idea how 'being agile' improves estimates (except by reducing scope to simple stories, I suppose). And as far as I can tell, #NoEstimates thinking is really about improving the way we negotiate the time and cost of delivering value to our stakeholders. At any rate, if the cause of poor estimates is as you say in the article, then the way to improve our accuracy is...
If, as you say:
Estimation uncertainty in software projects is often not driven by the difficulty of the problem we are trying to solve, but rather by the health of our codebase, the quality of process, and how much discipline we have in our management practices.
Then, we could improve our estimates by improving the health of our codebase, the quality of our process, and the discipline of our management practices.
So, our estimation accuracy improves as we get better at doing the job. I'm not sure there's any great insight there.
Paul,
While everything you say about the difficulties of estimating is true, those difficulties are symptoms. Finding the root cause of the "dysfunctions" is needed before any improvements in the estimating process can take place.
This is a universal issue in all domains, not just agile software development. There are professional societies (ICEAA, AACE, PMI, CPM) and government agencies (PARCA; all federal agencies have "cost estimating" departments) devoted to it, as well as many corporations.
The misbehaviors of people, such as making promises and confusing them with estimates, are also observed dysfunctions.
But none of these removes the Principle that making a non-trivial decision in the presence of uncertainty requires that we make estimates of the outcome of that decision.
We can never "truly" understand what we are doing; that's the basis of Risk Management. Risk comes from uncertainty, and uncertainty comes in two forms: reducible and irreducible. But we can construct models of these uncertainties, make measures when we have data, and construct derived models by which to make decisions.
When a direct measure is available, an objective measure is possible. When predicting an outcome, a subjective measure is used. The likelihood of the possible outcomes can be expressed quantitatively in terms of probabilities to indicate the degree of uncertainty. These probabilities can be objective when the events they predict are the results of random processes.
If we start with these Principles, understand that our weakness is to assume "estimates are wrong," replace that assumption with "estimates have accuracy and precision," and define the needed accuracy and precision before we start the estimating process, then it may be possible to escape the loop of poor estimating.
Paul and Ryan,
In our software intensive system of systems (google SISoS to see the paradigm), here's how we address the issues presented in this blog post:
Measuring progress to plan in the presence of uncertainty requires probabilistic modeling of the uncertainties of cost, schedule, and technical performance, in units of measure meaningful to the decision makers.
This meaningful information about the status of the program during its development is critical input to decision making. Any credible decision-making process must account for these uncertainties and the resulting risks they create, which impact the probability of program success.
In the presence of uncertainty, Space System programs have several important characteristics. To assess the increasing or decreasing probability of program success given these characteristics, units of measure are needed to define that success.
Each of these measures provides visibility into the Physical Percent Complete of the development or operational items in the project.
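As a minimal sketch of what such a measure can look like (the work items and weights are hypothetical), Physical Percent Complete credits progress only for finished, verified units of work, never for effort spent:

```python
# Physical Percent Complete: credit only completed, verified units of work.
# Items and weights are hypothetical.

# (planned weight, done?)
work_items = [
    (10, True),   # database schema delivered and verified
    (20, True),   # ingest service delivered and verified
    (30, False),  # UI in progress: counts as zero until done
    (40, False),  # integration testing not started
]

done = sum(w for w, finished in work_items if finished)
total = sum(w for w, _ in work_items)
print(f"physical percent complete: {100 * done / total:.0f}%")  # 30%
```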
This is the basis of making credible estimates, taking credible actions based on these estimates to correct or prevent the risks that result from these uncertainties - all needed to Increase the Probability of Project Success.