Value First and #NoEstimates - two sides of a coin? Part 2: #NoEstimates
As I studied these approaches a bit more deeply (I attended a keynote and a workshop of Vasco’s at NovaTec and a webinar with Kai, besides reading their respective books and other related material), I noticed that both value-focused approaches have some elements in common but focus on different areas of project work.
Part 1 of this small series of blog posts, Value First and #NoEstimates – two sides of a coin? Part 1: Value First provided a look at the key points of Kai Gilb’s Value First. Part 2 explains Vasco Duarte’s #NoEstimates. Part 3 compares both approaches.
Vasco Duarte: #NoEstimates
Vasco Duarte bases #NoEstimates on Lean foundations: estimates are inherently waste. They add value neither for the customer nor for the business, since they aren’t fit for the purpose they are used for. Vasco cites an old statement classifying good estimation approaches as those that deliver estimates within 25% of the actual results 75% of the time. In other words, estimates are not exact and carry a low confidence level. This results in the wide variance we see in agile as well as traditional (waterfall-like) projects. Vasco Duarte aims at reducing such waste in a way similar to the Lean Production movement, which in the past reduced inventories down to just-in-time production. So let’s eliminate the effort spent on estimation and focus on activities that actually provide value.
Why are estimates so bad?
According to Vasco Duarte, the main reasons for the discrepancies between estimates and reality are:
- Hofstadter’s Law: It always takes longer than you expect, even when you take into account Hofstadter’s Law. (Douglas Hofstadter – in Gödel, Escher, Bach: An Eternal Golden Braid. 20th anniversary ed., 1999, p. 152. ISBN 0-465-02656-7.)
- Parkinson’s Law: Work expands so as to fill the time available for its completion.
- Accidental Complication (Ordev 2013): organizational structures (e.g. how long does it take to get approval for a new test environment?) and the changes made over time to accommodate new functionality increase system complexity. The overall complication of a problem can be expressed as a function f(g(e), h(a)) of the inherent, essential complication of the problem to be solved, g(e), and this accidental complication, h(a). In software development, h(a) usually has a much higher influence than g(e). As a consequence, relative estimation, or estimation based on a feature’s functionality alone, cannot determine the cost of a feature. Yet that’s the way estimation is done in most initiatives.
- Inherent complexity in a software project is not predictable, since a project is usually a learning endeavor. It resembles starting “by building a tent, then evolve it into a hut, then increment it into a trailer, and later on deliver a castle” (Vasco Duarte). I like that picture of a castle – think of a medieval castle, built over years. It reminds me of some big systems I came across – not all of them legacy systems…
What do we estimate for?
So why do we use estimates? Vasco identifies three decisions or activities that estimates are meant to support:
- Project sizing and budgeting
How many people will the team consist of? How long will it take? How much will I need to pay?
- Forecasting project progress towards a delivery: scope and time in meeting a release milestone
Will we deliver on time? What will we deliver at a given release milestone?
- Removing risk of failure
What contingency plans do we need? What buffers (time and budget) do we need to deliver on time and within budget (which is also partly sizing and budgeting)?
How do we solve this?
This all comes down to the following problems, which are to be solved with better methods than estimation:
- Speed & Distance – without estimates
How fast are we making progress? When will we be ready to deliver a certain scope? #NoEstimates measures speed by counting the stories delivered in a sprint / iteration. As Vasco and others have found empirically, this is at least as good as counting story points, but saves the effort of estimating story points. The distance to go is the number of stories (backlog items) in your backlog. Using past data from the last 3 to 5 iterations, you don’t need to estimate: you can observe and forecast the system “development initiative” by the real behavior of the system. Systems theory and Process Quality Management provide tools to help you with observation and forecasting, e.g. control charts and the accompanying rules of quality control. It’s really surprising how good these forecasts are.
Using this speed forecast you can now predict how many stories you are likely to deliver by a given release milestone. Using the upper and lower limits of the control chart (in Vasco’s experience, one sigma works best for development processes), you get a range of stories. Now you can start a discussion on the priorities and which stories to include.
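To make the idea concrete, here is a minimal sketch of such a throughput forecast in Python. It is not Vasco’s tooling, just an illustration of the principle described above: take the story counts of the last few sprints, build a one-sigma band around the mean, and project that band over the remaining sprints. The function name and the sample numbers are my own invention.

```python
import statistics

def forecast_range(throughput_history, sprints_remaining):
    """Forecast how many stories are likely to be delivered in the
    remaining sprints, using a one-sigma band around the mean
    throughput (stories closed per sprint) of past iterations."""
    mean = statistics.mean(throughput_history)
    sigma = statistics.stdev(throughput_history)
    low = (mean - sigma) * sprints_remaining
    high = (mean + sigma) * sprints_remaining
    return round(low), round(high)

# Example: the last five sprints delivered 7, 9, 6, 8 and 10 stories;
# four sprints remain until the release milestone.
low, high = forecast_range([7, 9, 6, 8, 10], sprints_remaining=4)
print(f"Likely delivery: between {low} and {high} stories")
# → Likely delivery: between 26 and 38 stories
```

With that range in hand, the prioritization discussion becomes: which stories must be inside the lower bound, and which can live in the band between the limits?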
- Remove Risk of Failure – without spending all the time planning
How do we tackle the risk of failure? Vasco recommends planning to survive, not planning for the absence of failure, as is done in many real-life projects. So build a robust system – the technical system you develop as well as the development process – that survives failures. Use small cycles and feedback loops to reduce the risk of perishing when a failure occurs. Break down the work to remove risk, not to reduce size! Vasco uses an interesting decision matrix for story breakdown (see picture).
- Deliver on business goals – without waiting on the end of the project to know if we get what we need
How do we focus on the right things to deliver? How do we know early whether we’re on track? Vasco proposes daily goal-driven experiments based upon the business goals: work only on those experiments that get you closer to your business goals, and experiment heavily to validate your assumptions.
How to size the team and set the budget
For project sizing, Vasco proposes analogous estimation at a high level to set the team size at project start. Then use the usual PDCA cycle (plan, do, check, act – the Deming cycle) to make adjustments as necessary. As a budget is a proposed investment in development, you shouldn’t base it on the cost side but on the value side of the equation. To set an initial budget, use the sizing as one input and the sum you are initially willing to invest as the second. Then start delivering solutions in small iterations. If you focus on the problems that matter most (= highest priority), you can always decide whether you want to invest more or whether what you have achieved so far is sufficient. So the budget is not mainly driven by costs, but by a value-oriented investment decision.
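The iteration-by-iteration investment decision described above can be sketched as a simple checkpoint: keep funding further iterations as long as the value they add exceeds what an iteration costs. This is my own simplification, not a formula from Vasco; the function name and the figures are hypothetical, and in practice “value” is much harder to quantify than this suggests.

```python
def worth_another_iteration(recent_value_gains, iteration_cost):
    """Checkpoint after each iteration: continue investing while the
    average value added by recent iterations still exceeds the cost
    of one more iteration (diminishing returns end the investment).
    All figures are hypothetical, in the same currency unit."""
    avg_recent_gain = sum(recent_value_gains) / len(recent_value_gains)
    return avg_recent_gain > iteration_cost

# Early on, each iteration adds roughly 30k of value at 20k cost:
print(worth_another_iteration([32_000, 28_000, 30_000], 20_000))  # → True
# Later, gains taper off to about 12k per iteration:
print(worth_another_iteration([15_000, 11_000, 10_000], 20_000))  # → False
```

The point is not the arithmetic but the shape of the decision: because the highest-priority problems are solved first, stopping early still leaves you with the most valuable part of the solution.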
#NoEstimates’ Main Takeaways
My main takeaways of Vasco’s workshop are:
- A project is a system and as such follows systems theory. Performance is defined by the system to a great extent. You cannot predict the behavior of a sufficiently complex system, such as a development project. You can only experiment and observe the results.
- Size the project team initially using analogous estimation – or start with a small team. Then use the PDCA cycle to make adjustments to the team as necessary.
- If you need to set a budget, set it by an investment decision based on the value you want to achieve. Revisit that decision and the results so far every few iterations.
- Focus on the most important thing to solve – prioritization, not estimation, is key to success.
- For forecasting, use the #NoEstimates forecast based on story counts.
What do you think? Is Vasco’s approach applicable at your place? Let us know your thoughts.
Don’t miss the final part of the blog series!