Why quantitative modelling doesn’t always lead to better decision-making
Quantitative analysis is increasingly a requirement in the development of public policy. However, it does not necessarily lead to better investments as benefits are frequently overstated and costs underestimated.
The process is further complicated by the complex biases of stakeholders. We have found some simple strategies that can improve the use of quantitative analysis in decision-making. These enable policy makers to challenge their own cognitive biases, make analysts more accountable for their forecasts, and so better balance optimism with realism.
Quantitative analysis is now a must rather than an option in public policy
Before policy makers can have a new program or initiative approved, it is increasingly the case that they must provide quantitative analysis that demonstrates its merit. The most prevalent example of this is the use of cost-benefit analyses (CBAs). CBAs serve two purposes: to determine if a project or program is a sound investment in quantifiable terms, and to provide a way to compare diverse projects. This emphasis on quantitative analysis is part of a drive for more evidence-based policy-making. It is also meant to ensure that, in an era of increasingly constrained government revenue, money is spent where it will be most effective.
However, a ‘numbers approach’ does not necessarily lead to better investments
Research shows that CBAs can systematically overestimate the benefits and underestimate the costs of a project, leading to poor investments. For example, academic Bent Flyvbjerg reviewed over 250 transportation projects and found that, in comparison to the CBA upon which they were approved, cost overruns of 50% in real terms were ‘common’, as were benefit forecasts overstated by 20-70% [1]. So, where the CBA was intended to inform better decision-making, it led to projects with much higher costs and lower benefits than policy makers or the community would have accepted. Comparative studies suggest similar problems are present for a wide range of other projects, such as ICT systems, mega-events (such as the Olympics), and urban and regional development projects.
In our work with clients, we also see the pitfalls of using CBAs. Quantitative analysis can sometimes simply slow down a decision that would have been made anyway. At worst, it can confuse or stunt robust decision-making rather than enhance it. In practice, this looks like a focus solely on the final net present value (“So, is it positive or negative?”); an assumption that a more complex model means a more accurate one; and a premium placed on what can be measured over costs and benefits that can only be described meaningfully in qualitative terms. The result is a CBA that the client and its key stakeholders either don’t understand or don’t believe in.
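To make the scale of the problem concrete, the short sketch below works through the arithmetic with purely illustrative figures (they are not drawn from the Flyvbjerg study or any real project): a project that looks comfortably positive at approval turns negative once a 50% cost overrun and a mid-range benefit shortfall are applied.

```python
# Illustrative sketch only: hypothetical figures showing how an approval-stage
# NPV can flip sign under cost overruns and benefit shortfalls of the scale
# reported in the literature.

def npv(cashflows, rate):
    """Net present value of annual cash flows, with year 0 undiscounted."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

DISCOUNT_RATE = 0.07  # assumed real discount rate

# Approval-stage forecast: $100m cost in year 0, $12m p.a. benefits for 20 years.
forecast = [-100.0] + [12.0] * 20
print(f"Forecast NPV: {npv(forecast, DISCOUNT_RATE):6.1f}m")   # roughly +27m

# Outturn: costs 50% higher, benefits 30% lower (mid-range of 20-70% overstatement).
outturn = [-100.0 * 1.5] + [12.0 * 0.7] * 20
print(f"Outturn NPV:  {npv(outturn, DISCOUNT_RATE):6.1f}m")    # roughly -61m
```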
Of most concern, complex biases play an important role here
There are three broad causes of inaccurate or poor CBAs [2]: technical causes (the wrong data, methods or expertise applied to the analysis); psychological causes (optimism bias, where planners genuinely believe the most favourable scenario); and political causes (costs and benefits misrepresented to get a project approved).
The first cause is a more straightforward question of engaging the right experts, resources and quality assurance processes for the job. The second and third are more worrying as they suggest that policy makers can, knowingly or unknowingly, commission misleading quantitative analysis.
Some simple strategies can improve the use of quantitative analysis in decision-making
To make sure quantitative modelling truly contributes to better decision-making, we need to identify and counteract our complex biases. We focus on three useful strategies to do this:
Acknowledge the otherwise unspoken political drivers
CBAs suggest we can bring a ‘science’ to decision-making, but ultimately they are driven by politics and values. At Nous we appreciate this and have an upfront discussion with our client to understand the motivations behind the CBA and what the end goal is (a CBA simply being one ‘means’ to realise this). It is important for those who commission the work to consider questions such as: What do you assume the outcome will be? What do key stakeholders have at stake? How will key decision makers receive an unexpected finding?
By articulating the motivations behind the project, we can determine whether a full CBA is in fact the most valuable approach. For example, if key decision makers will only accept one of the options, it may be more helpful to conduct a detailed quantitative analysis of that option to better understand the key implementation challenges and success factors. Where a CBA is warranted, an upfront discussion allows us to put appropriate governance mechanisms in place to ensure underlying motivations do not unjustifiably drive the research in a certain direction.
Encourage the client and stakeholders to get their hands dirty
A model is always an approximation of reality: it combines a theory of change about how the real world works with a quantitative representation of that theory, built from data inputs and assumptions. For a CBA to provide useful input into decision-making, stakeholders must have confidence in what it measures and how.
To achieve this, we actively engage clients and, where possible, stakeholders in the design of the quantitative model. Initially, this means collectively making some clear decisions about the boundaries of what can and cannot be modelled and what are reasonable assumptions to make. Similarly, stakeholders need to be engaged in the important trade-offs between precision and accuracy – it is tempting to make CBAs more detailed and complex (i.e. capture the real world situation more accurately) but this brings more assumptions and ambiguity into the analysis (i.e. creates a less precise estimate).
This process leverages the strengths of policy makers. They may struggle to pick out inconsistencies in a complex quantitative model, but they can readily identify an unreasonable assumption about how policy works in practice, or whether stakeholders will be comfortable with the quantification of non-market benefits. It is an iterative process: once data has been collected, we retest the design of the model with the client to make sure it maximises inference from reliable inputs and limits the reliance on assumptions.
The client then has a CBA that they and their stakeholders have confidence in and that can genuinely challenge received wisdom. A model that has been through many iterations and that stakeholders support is far more valuable than a highly complex one developed at arm’s length from those who must implement its results.
Build in checks to make sure we aren’t simply seeing what we hope to be true
Insights from behavioural economics can also help policy makers here. These approaches argue that poor investments are not simply an outcome of rational decision-making in uncertain contexts, as traditional economics would suggest. They are, rather, a consequence of flawed decision-making that is obscured by cognitive bias [3]. In particular, we tend to overstate our abilities and the degree of control we have over events. To address this, decision makers should seek an ‘outside view’ when assessing the merits of a project to counter their intuitive or ‘inside view’ [4].
There are formal approaches and tools to improve the reliability of forecasts. For example, seek out analysis of comparable projects, their CBAs and their subsequent performance (a technique known as reference class forecasting) to reveal the unconscious biases of other policy makers (e.g. if comparable projects came in at 20% over expected costs on average, we should consider applying that uplift to the output of our own CBA). A simpler approach would be to require the modeller to provide clear guidance notes on the limits of the CBA for decision-making. This would include their assessment of how well the model approximates the policy theory of change, the quality of the inputs available and the implications of the key assumptions made.
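As a rough illustration of this adjustment, the sketch below (again with hypothetical figures and a deliberately simple cash-flow profile) applies a 20% cost uplift taken from comparable projects and shows how much the headline NPV moves.

```python
# Illustrative sketch of an 'outside view' adjustment: inflate forecast costs
# (and optionally trim benefits) by the average error observed in comparable
# past projects, then recompute the NPV. All figures are hypothetical.

def outside_view_npv(capex, annual_benefit, years, rate,
                     cost_uplift=0.0, benefit_haircut=0.0):
    """NPV with costs uplifted and benefits trimmed by reference-class averages."""
    adjusted_capex = capex * (1 + cost_uplift)
    adjusted_benefit = annual_benefit * (1 - benefit_haircut)
    pv_benefits = sum(adjusted_benefit / (1 + rate) ** t for t in range(1, years + 1))
    return pv_benefits - adjusted_capex

# Inside view: the CBA as modelled (~ +27m on these figures).
print(round(outside_view_npv(100.0, 12.0, 20, 0.07), 1))
# Outside view: comparable projects averaged 20% over expected costs (~ +7m).
print(round(outside_view_npv(100.0, 12.0, 20, 0.07, cost_uplift=0.20), 1))
```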
With these tools policy makers can challenge their own cognitive biases, make analysts more accountable for their forecasts, and so better balance optimism with realism.
[1] Flyvbjerg, B., ‘Survival of the unfittest: why the worst infrastructure gets built’, Oxford Review of Economic Policy, Vol. 25, No. 3, 2009.
[2] Ibid.
[3] Based on the seminal work of Lovallo, D. & Kahneman, D., ‘Delusions of Success’, Harvard Business Review, July 2003.
[4] Ibid.