http://flyvbjerg.plan.aau.dk/0512DRWBPUBL.pdf

This paper by Bent Flyvbjerg is essential reading for anyone interested in reforming public investment management systems. It summarises a body of work undertaken by Flyvbjerg and a research group on large infrastructure at Aalborg University (Denmark), and the results are striking.
• Significant cost overruns (in real terms) were seen across a sample of 258 major transportation projects, irrespective of country, continent or transport mode, and with no tendency to diminish. 9 out of 10 projects had a cost overrun, and the average (real) overrun was:
o 45% for rail
o 34% for bridges and tunnels
o 20% for roads
• From a sample of 208 rail and road projects:
o 9 out of 10 rail projects had overestimated traffic, with actual passenger traffic on average 51% lower than forecast
o Interestingly, traffic for roads tended to be under-estimated, with actual traffic on average 9.5% higher than forecast
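To get a feel for the combined effect of these averages (the benefit-cost ratio of 2.0 below is a purely hypothetical figure, not one from the paper): a rail project appraised with a benefit-cost ratio of 2.0 that turns out 45% more expensive and carries 51% less traffic than forecast ends up with an outturn ratio of roughly 2.0 × (1 − 0.51) / (1 + 0.45) ≈ 0.68, i.e. well below break-even despite a comfortable margin at appraisal.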
Many specific examples of projects with cost overruns and benefit shortfalls are given, and three explanations for the systematic bias are explored:
• Technical explanations – imperfect forecasting, inadequate data, inexperienced forecasters, etc.
• Psychological explanations – inherent over-optimism on the part of planners
• Political-economic explanations – planners and promoters deliberately and strategically over-estimating benefits and underestimating costs
The first explanation does not hold up: if errors were merely technical, one would expect them to be distributed roughly evenly on either side of zero. Instead, the errors are systematically biased in one direction and do not diminish over time, in spite of significant ‘improvements’ in approaches to forecasting.
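As a minimal sketch of the statistical point (the overrun figures below are invented for illustration, not Flyvbjerg's data): if errors were honest technical mistakes, roughly as many projects should come in under budget as over it, something a simple sign test can check.

    # Sketch: are cost forecast errors symmetric around zero?
    # The overrun values are hypothetical, for illustration only.
    from scipy.stats import binomtest

    # Outturn cost / forecast cost - 1, one value per project (invented numbers)
    overruns = [0.45, 0.30, -0.05, 0.60, 0.20, 0.15, 0.90, 0.10, -0.02, 0.35]

    n_over = sum(1 for x in overruns if x > 0)
    # Under the "honest mistakes" hypothesis, P(overrun) should be about 0.5;
    # a small p-value indicates the errors are systematically one-sided.
    result = binomtest(n_over, n=len(overruns), p=0.5, alternative='greater')
    print(f"{n_over}/{len(overruns)} projects over budget, p = {result.pvalue:.3f}")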
The second explanation is also dismissed: if the bias were simply over-optimism, planners could be expected to recognise it over an extended period of time (the sample covers 70 years) and begin to correct for it, yet the sample shows no reduction in errors over time.
Flyvbjerg favours the third explanation, i.e., that project promoters and planners systematically ‘cook the books’ in order to make projects look better than they are and obtain funding approval. While this is difficult to prove, two studies are quoted in which public officials, planners and consultants were interviewed on a confidential basis. One study of the UK showed that ‘strong interests and strong incentives exist at the project approval stage to present projects as favourably as possible, that is, with the benefits emphasised and costs and risks deemphasised.’ A study of the USA found that ‘In case after case, planners, engineers, and economists...had had to “revise” their forecasts many times because they failed to satisfy their superiors. The forecasts had to be cooked in order to produce numbers that were dramatic enough to gain federal support...’ This will probably sound familiar to anyone who has worked at the coalface of project appraisal.
On the technical side of improving forecasts of costs, benefits and construction times, Flyvbjerg recommends the use of ‘reference-class forecasting’. Essentially this means building forecasts from the actual experience of a class of similar completed projects, rather than from the specifics of the particular project in hand. It is suggested that this would tend to produce far more accurate forecasts by bypassing ‘cognitive and political biases such as optimism bias and strategic misrepresentation...’ This approach does not require planners and forecasters ‘to make scenarios, imagine events, or gauge their own and others’ levels of ability and control, so they cannot get all these things wrong.’
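A minimal sketch of how a reference-class uplift might be derived in practice (the overrun sample and the 20% acceptable risk of exceedance are assumptions for illustration, not figures from the paper): take the distribution of cost overruns observed in a class of comparable completed projects and read off the uplift needed so that the budget would be exceeded with only the chosen probability.

    # Sketch of a reference-class uplift for a cost estimate.
    # 'reference_overruns' is a hypothetical sample of (outturn / forecast - 1)
    # cost overruns for comparable completed projects (invented numbers).
    import numpy as np

    reference_overruns = np.array([0.05, 0.12, 0.20, 0.25, 0.34, 0.40, 0.45,
                                   0.55, 0.70, 0.90])

    def reference_class_budget(base_estimate, overruns, acceptable_risk=0.2):
        # Uplift the project's own estimate so that, judged against the
        # reference class, the chance of exceeding the budget is ~acceptable_risk.
        uplift = np.quantile(overruns, 1.0 - acceptable_risk)
        return base_estimate * (1.0 + uplift), uplift

    budget, uplift = reference_class_budget(100.0, reference_overruns)
    print(f"Uplift: {uplift:.0%}, budgeted cost: {budget:.1f}")

The forecast thus rests on the outside view of how comparable projects have actually performed, not on the project team's own scenario-building.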
To attack the deeper problem, ‘institutional change with a focus on transparency and accountability is necessary’. Among the proposals to achieve this are:
• National governments should avoid offering discretionary capital grants to local institutions because of the perverse incentives these create. Unconditional capital grants are to be preferred, with local authorities deciding how these should be used.
• Forecasts should be subject to independent peer review
• Forecasts should be benchmarked against other comparable forecasts
• Forecasts, peer reviews and benchmarkings should be made public
Perhaps more controversially, Flyvbjerg proposes that ‘Malpractice in planning should be taken as seriously as it is in other professions’, with penalties such as exclusion from professional bodies and even criminal proceedings. Two difficulties suggest themselves. Firstly, deliberate malpractice in forecasting may be difficult to prove and may emerge only after a substantial period of time. Secondly, in some countries with weaker institutions, particularly professional and legal ones, malpractice cases may be brought maliciously.
My own experience suggests that some progress could be achieved by ensuring that consultancy firms involved in feasibility studies are not allowed to be involved in later stages of project development and implementation. While such firms are usually excluded from supervising the implementation of projects, involvement in detailed design and preparation of tender documentation is often possible (and, indeed, often forms part of the same contract). These activities are often more lucrative than the feasibility study work, thus inviting a positive gloss on feasibility findings. Of course, such changes would not remove the systemic biases that Flyvbjerg has identified.