
thinking about estimation
  • payson.hall
  • Mar 22, 2021

The Emptiness of Single Point Estimates

In freshman Physics, students learn not to report more precision in their answers than can be justified by the facts of the problem. Why are project managers still expected to report bogus completion dates that imply estimating precision that we don’t really have? 

How long does it take you to go from your front door to the gate at your local airport?  To be clear, I mean the point where your baggage is checked, you are through security, and you are ready to stand in line to board.  Write down an answer in minutes before you continue.  Go ahead, I’ll wait. 

To answer that question, you probably made assumptions, either implicitly or explicitly.  Is the cab waiting outside?  Will weather or traffic patterns slow the commute to the airport?  Is it a major holiday with heavy air travel expected? 

What was your approach?

  • Did you recall the best time you ever made and write that down?  That’s optimistic. 
  • Did you estimate the average amount of time required?  How often do you beat the average?  If you only allocated that amount of time, how often would you miss your flight? 
  • Did you adjust your estimate to allow for risk?  How often do you think you would miss your flight if you used your risk adjusted estimate? 

Reflect on your answers and you will learn something about your estimation process.  If you were brave and actually wrote down an estimate, would you want to change it or qualify it now that you think more about it? 

This thought experiment is to get you thinking about the variability in estimates.  If you are a road warrior like me, you have traveled to the airport hundreds of times.  You have a sense of how long it takes, but you also know the grief of missing a flight and screwing up your itinerary.  I typically plan to be at security an hour before boarding time and enjoy coffee and a leisurely stroll to the gate where I expect to wait for half an hour or so for my flight. 

Look at that lonely little number you wrote down - a lot of information about that estimate is hidden. 

Now imagine you are an executive sponsoring a mission-critical project.  The project will take millions of dollars and consume significant resources.  You ask the project manager when the project will be completed.  They respond with a date eighteen months in the future: 

“January 26th, 2023” 

Pretty specific, isn’t it?  What estimation information is hiding in that answer?  How many hundreds of smaller tasks with imprecise estimates were aggregated to create that particular answer?  Were the smaller component estimates optimistic or pessimistic?  Has the project manager added a risk adjustment to each of the subordinate estimates?  Have they added a risk adjustment to the final answer?  Did they take their best guess and double it? 

Executives are rarely ignorant, but they may not have the project management expertise to even know to ask questions like that.  A project manager could bury an executive with footnotes and pages of assumptions to better document the context of the answer – but that would likely be overwhelming and not terribly helpful. 

What if we could help executive sponsors understand that there is no such thing as a “precise estimate” while still providing an answer to their reasonable question about how long the project will take? 

Let’s start by looking at an individual task.  Imagine your estimator told you that the fastest (best-case) the task could be done in was 11 days, the likely duration was 18 days, and the worst case was 28 days.  I’ve graphed the task duration on the X-axis below.  Then I superimposed a triangle to represent the probability of any specific outcome on the Y-axis.  You can see that the best- and worst-case outcomes are not very likely, but they are possible.  The nominal case is the most likely outcome.  Inside the triangle, I’ve put the numbers 1 to 100 to help visualize.  We could simulate the duration of this task by generating a random number between 1 and 100 and looking up the corresponding duration. 

[Figure: triangular probability distribution of task duration, with best case 11 days, likely case 18 days, and worst case 28 days]

If you are a nerd like me, you could open Excel right now and enter the formula 

=randbetween(1,100) 

Then copy and paste it a dozen times to generate hypothetical durations for this task.  The number generated isn’t the duration, but if you look your answer up on the triangle, you will find a corresponding duration at the bottom.  Hit recalculate a few times and you should see that most of the answers are near the likely case, with some outliers. 
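If you prefer Python to Excel, you can skip the lookup step entirely: the standard library’s `random.triangular` samples directly from a triangular distribution.  Here is a minimal sketch using the 11 / 18 / 28 day estimates from the example above:

```python
import random

# Best-case, most-likely, and worst-case estimates (in days) from the example
BEST, LIKELY, WORST = 11, 18, 28

# random.triangular(low, high, mode) draws one simulated "actual" duration
samples = [random.triangular(BEST, WORST, LIKELY) for _ in range(12)]

for d in samples:
    print(f"{d:.1f} days")
```

Run it a few times and, just like recalculating the spreadsheet, you should see most durations cluster near 18 days with occasional outliers toward 11 or 28.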

Now imagine all project tasks in a network.  Using a simulation, we could “do” the project by telling the computer when the project started and having the computer randomly generate an “actual” duration for each task from the estimates.  The result would be a date and time of project completion after we accumulated all the durations. 

Now imagine we ran that simulation 10,000 times, generating different random numbers each time.  The output would be 10,000 end dates, which we could graph to show a more nuanced prediction of how long the project will take.  This is called a “Monte Carlo Simulation”. 
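The whole loop fits in a few lines of Python.  The sketch below assumes a toy project of three sequential tasks; the (best, likely, worst) numbers are made up for illustration, and a real project network would have dependencies rather than a simple sum:

```python
import random

# Hypothetical (best, likely, worst) duration estimates in days for three
# sequential tasks -- illustrative numbers only
TASKS = [(11, 18, 28), (5, 8, 14), (20, 25, 40)]

def simulate_once():
    """One simulated pass through the project: sample each task, sum durations."""
    return sum(random.triangular(best, worst, likely)
               for best, likely, worst in TASKS)

RUNS = 10_000
durations = sorted(simulate_once() for _ in range(RUNS))

# Percentiles summarize the spread of the 10,000 simulated outcomes
for p in (10, 50, 80, 95):
    print(f"P{p}: {durations[RUNS * p // 100]:.1f} days")
```

The percentile printout is the numeric version of the graph discussed next: instead of one date, you get a range with probabilities attached.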

There are several tools that can provide a version of the graph shown below.   

[Figure: Monte Carlo simulation output showing a histogram of simulated completion dates with a cumulative probability curve superimposed]

The X-axis here shows the completion dates.  The right Y-axis shows the frequency of the given end dates from the simulation data.  The left Y-axis shows the cumulative probability distribution – a fancy way to answer the question, “How likely is it that the project finishes on or before the given date?”  This corresponds to the red hill climber that starts at zero on the left side of the figure and climbs to 100 by the time it gets to the right. 
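That cumulative curve is easy to compute from the simulation output itself.  This sketch reuses the hypothetical three-task project from before, with an assumed start date, and answers the sponsor’s question directly:

```python
import random
from datetime import date, timedelta

random.seed(7)  # fixed seed so the sketch is reproducible

# Hypothetical (best, likely, worst) estimates in days for three sequential
# tasks, plus an assumed start date -- all illustrative
TASKS = [(11, 18, 28), (5, 8, 14), (20, 25, 40)]
START = date(2021, 7, 1)

end_dates = sorted(
    START + timedelta(days=sum(random.triangular(b, w, m) for b, m, w in TASKS))
    for _ in range(10_000)
)

def prob_on_or_before(target):
    """Fraction of simulated runs that finish on or before `target`."""
    return sum(d <= target for d in end_dates) / len(end_dates)

# "How likely is it that the project finishes on or before September 1st?"
print(f"{prob_on_or_before(date(2021, 9, 1)):.0%}")
```

Sweep `prob_on_or_before` across a range of dates and you have the red hill-climber curve as a table of numbers.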

Would this picture be more helpful to you as an executive than a point estimate like “January 26th, 2023”? 

Might this graph encourage more focused conversations about what is driving the variability? 

What if the market changes suddenly and the schedule must be reduced – would this provide a focused area to search for solutions? 

Historically, Monte Carlo tools were expensive, complicated, and could take literally hours to run.  That was the excuse used to justify ignoring this approach for decades at the expense of billions in lost profits and immeasurable grief for the people involved.   

With the computing power on most modern desktops, the ability to provide better data for executive decision-making is now a few keystrokes away.   

There are better options.  I encourage you to explore them, and if you would like to continue the conversation, you can reach us here.

