Why are so many projects late — The science of project management

Markus · Power the People · 7 min read · Jul 25, 2018


Hofstadter’s Law: It always takes longer than you expect, even when you take into account Hofstadter’s Law.

Project delays — in many areas they are so common that they don’t even come as a surprise anymore. Think about the last time you worked with a contractor in your house, the last time you participated in a big business initiative or the last time you heard about another big infrastructure project on the news. Delays seem to be everywhere. At the same time, we spend more and more time and effort on ever-changing project planning frameworks and methodologies, training people throughout their careers in everything from PRINCE2® to various shades of Agile. Why is there such a disconnect between what we think we can achieve (and manage) and the actual time and cost of complex things?

Daniel Kahneman and Amos Tversky¹ coined the term “Planning Fallacy” in 1979 and later expanded its definition. Today it describes the tendency to underestimate the time, costs, and risks of future actions while overestimating the benefits. The fallacy can be observed in many settings, from one-person tasks (such as the completion of academic papers²) to the well-chronicled delays in major construction projects, such as the German airport BER³.

Kahneman and Tversky originally explained this fallacy with the planner focusing on the most “optimistic” scenario instead of using their full experience when planning a task or project. While this certainly happens (in my personal experience big companies tend to emphasize “sunny” or “best case” scenarios while planning but forget this emphasis once the predictable delay occurs), even frameworks that specifically account for both known and unknown risks tend to underestimate the total length of a project. Some of the reasons for this are:

  • Optimism Bias: The most obvious explanation for underestimating projects is optimism bias. People tend to be optimistic about their personal exposure to risk (i.e. they underestimate their personal risk⁴) and hence tend to be optimistic about their project’s exposure to the same risks.
  • Self-Serving Bias: Related to the optimism bias, human beings also tend to exhibit a self-serving bias when evaluating their own past performance, taking credit for things that went well but attributing failures to external factors⁵. This leads people to incorrectly recall how long events took in the past and hence to underestimate future projects. Together with the optimism bias, it also leads planners to underestimate the recurrence of events that have delayed a project in the past, such as vacations, illnesses or the time it takes to plan a meeting.
  • Focal Bias: Focal bias describes a bias in which the planner concentrates too much on the part of the problem or project that they know most about and uses this information to frame the planning decision. As the part the planner knows most about is usually also the safest part of the project (in terms of planning), this leads to underestimation⁶.
  • Anchoring and Adjustment: Anchoring and adjustment describes a mental process for judgments under uncertainty, in which an initial “anchor” value is chosen and adjusted based on the information present⁷. Typically, humans under-adjust their initial anchor in this process, hence the results heavily rely on the chosen anchor. This can even be shown if the anchor is obviously irrelevant or nonsensical⁸. In the context of project planning, the reality in many companies is that projects start with so-called “rough estimates”, which are often little more than wishes. However, those wishes have a heavy influence on the final planning outcome by providing an “anchor” for further updates.
  • Motivations: Counter-intuitively, a highly-motivated and involved project planner is actually counter-productive for planning success. Individuals who are highly motivated to complete the actual tasks quickly (not only the planning of said tasks) underestimate the total duration of the tasks and hence underestimate total project time⁹. In business, a similar effect often leads individuals to underestimate tasks that they personally enjoy compared to tasks that they find tedious or boring.
  • Perceived Power: When individuals perceive a sense of power, they are especially vulnerable to the planning fallacy¹⁰. The reason is that a perception of power tends to narrow the individual’s focus on the principal goals of a project, leading them to disregard peripheral information. This also means that they tend to disregard prior experience and possible obstacles to the project.
  • Authorization Imperative: A business-specific reason for incorrect project planning can be found in the “Authorization Imperative”. This describes the reality in most businesses of requiring authorization (often in the form of an approved “business case”) to start and lead projects. This leads to a strong motivation of the planner to report relatively low project costs, which are usually directly and indirectly tied to project length.
  • Focus on opportunities vs obstacles: An interesting effect arises from whether the planner focuses on a project’s opportunities or its obstacles. Depending on your mental distance to the project (both in time and involvement), focusing on either one can make the planning fallacy worse. This is especially true for projects that are mentally close to the planner (e.g. a project that should be started soon and has been worked on for a while). In this case, if the planner is not actively engaged in visualizing obstacles, they tend to disregard them in the final plan¹¹.
  • Asymmetry and Scaling: First popularized by Taleb, asymmetry and scaling refer to the principle that random events are not evenly distributed in either impact or frequency, and that negative consequences scale with the complexity of the system¹². Therefore, it is easy to underestimate the negative consequences of random events on a project. This is especially true for complex projects, where positive events usually have a relatively minor impact while the impact of negative random events can essentially be “unlimited”.
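The asymmetry point can be illustrated with a small simulation (all numbers below are hypothetical, chosen only to show the shape of the effect): a plan sums up best-case task durations, while in reality each task occasionally suffers a heavy-tailed delay, so the realized total almost always exceeds the naive sum.

```python
import random

random.seed(42)

# Hypothetical project: ten tasks, each planned at 5 days.
planned = [5.0] * 10
naive_total = sum(planned)  # the "plan": 50 days

def simulate_once(tasks):
    """One project run: each task has a 20% chance of a heavy-tailed delay."""
    total = 0.0
    for est in tasks:
        total += est
        if random.random() < 0.2:
            # Delays are asymmetric: usually small, occasionally huge.
            total += random.paretovariate(1.5)  # heavy-tailed extra days
    return total

runs = [simulate_once(planned) for _ in range(10_000)]
avg = sum(runs) / len(runs)
overruns = sum(r > naive_total for r in runs) / len(runs)

print(f"naive plan: {naive_total:.0f} days")
print(f"simulated mean: {avg:.1f} days")
print(f"share of runs over plan: {overruns:.0%}")
```

Note that no single task is badly mis-estimated here; the overrun emerges purely from the asymmetry of the delay distribution, which is why it is so easy to miss in a task-by-task review.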

Knowing all of these imperfections of the human mind does not necessarily help us improve our project planning capability. However, there are some strategies that can help generate better plans in the future:

  • Segmentation: Segmentation describes the breaking down of larger tasks into smaller, more manageable pieces. Those pieces are then estimated separately to come up with a total plan. The principle behind segmentation is that reviewing each piece individually gives more depth to the planning picture, and the sum of the smaller estimates is usually larger (and more realistic) than a single estimate for the whole task¹³. Many modern software packages use segmentation techniques to improve planning and execution of projects.
  • Reference Class Forecasting: A completely different approach to classical project planning is called “Reference Class Forecasting”. This methodology ignores the classic principles of project planning, such as identifying dependencies and planning discrete tasks, and instead identifies “reference projects” that are similar in size and scope, then establishes a planning horizon based on those real-life examples. This methodology has been successfully implemented in large government projects¹⁴. Reference Class Forecasting is especially useful to find the correct anchor (see anchoring) for a plan before going too far into detail.
  • Impartial Planning: Given that many of the biases described above especially apply to individuals who also execute the tasks that were planned, it can be useful to have the project (or at least parts of it) planned by impartial planners who will not participate in it themselves. The classic Project Manager, who is not responsible for any part of the project delivery but only its organization, is born from this idea. This helps especially if a project has many inter-dependencies and moving parts, as an impartial planner does not have a personal preference (e.g. being especially efficient in his or her own tasks) and will prioritize the overall project timeline above all else.
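Reference class forecasting can be sketched in a few lines. The idea, simplified: collect the ratio of actual to estimated duration for comparable past projects, and plan the new project against a chosen percentile of that historical overrun distribution rather than against the inside view. All project numbers here are made up for illustration.

```python
# A minimal sketch of reference class forecasting, using hypothetical numbers.
# For each comparable past project we know its original estimate and its
# actual duration; the ratios form an empirical "overrun" distribution.
past_projects = [
    # (estimated months, actual months)
    (6, 8), (10, 14), (4, 4), (12, 20), (8, 9),
    (9, 15), (5, 6), (7, 10), (11, 12), (6, 11),
]

ratios = sorted(actual / est for est, actual in past_projects)

def uplift(percentile: float) -> float:
    """Overrun ratio that `percentile` of reference projects stayed under."""
    idx = min(int(percentile * len(ratios)), len(ratios) - 1)
    return ratios[idx]

own_estimate = 10  # months: the new project's inside-view estimate

# Plan against the 80th percentile of historical overruns instead of
# trusting the inside view alone.
forecast = own_estimate * uplift(0.8)
print(f"inside view: {own_estimate} months, "
      f"reference-class forecast: {forecast:.1f} months")
```

The chosen percentile expresses risk appetite: planning at the median accepts a roughly 50% chance of overrun, while a higher percentile buys more schedule certainty at the cost of a longer (but more honest) plan.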

An example that combined these approaches was an ERP roll-out I participated in earlier in my career. In this particular case, the total project consisted of many individual implementations in several locations within one company, following a common business process template and using the same solution.

We were able to increase predictability and precision in planning by using four concrete improvements:

  1. First, we obtained rough estimates for a single implementation (one location) by talking to the vendor and external companies who had used this software to give us a baseline (reference-class forecasting)
  2. Then, we used a resource who was experienced in projects of this scope without knowing the specifics of the product to build an overall, high-level roll-out plan (Impartial planning)
  3. Third, we split up the high-level planning into small pieces to make each step of the way concrete and understandable (segmentation)
  4. And lastly, we used the learnings from each individual implementation to improve the plan for the following location in the same way

While our plan still was not perfect, it allowed us to forecast Go-Live dates and a roll-out schedule with much greater accuracy than we thought possible at the beginning of the roll-out (we hit all our targets for the first wave of implementations!). Project planning might still be more art than science, but by following what research has shown to work, you can definitely improve your plan!

Thanks for reading, and hopefully the ideas above can help you improve your next project plan or, at least, understand the risks hidden in the neat slides and Gantt charts we are all used to.

References

1: Kahneman & Tversky, 1979
2: Buehler, Griffin & Ross, 1994
3: https://www.telegraph.co.uk/travel/news/berlin-new-airport-delayed-again/
4: O’Sullivan, 2015
5: Pezzo, Litman et al., 2016
6: Buehler & Griffin, 2003
7: Tversky & Kahneman, 1974
8: Strack & Mussweiler, 1997
9: Buehler, Griffin & MacDonald, 1997
10: Weick & Guinote, 2010
11: Peetz, Buehler & Wilson, 2010
12: Taleb, 2007
13: Forsyth, 2008
14: Flyvbjerg, 2006
