The Netherlands holds the record for the longest continuous economic expansion in modern times – almost 26 years ending in 2008. Australia – despite what both political parties have been saying for decades about the damage they each do – might well break that record sometime early next year.
But who would you trust to predict whether that will happen and, if it does, how much longer the expansion might last? The Reserve Bank is a possibility; by contrast, the track record of academic, bank and other private sector economists is appalling. In most years they are lucky to get the trend, let alone the final numbers, right.
Yet there are people who might. They have been dubbed ‘superforecasters’ by Philip Tetlock, the long-term researcher on predictions and their accuracy. Tetlock (the blog has often referred to his work) first became famous for his work Expert Political Judgment, which found a correlation between the confidence and profile of pundits and the likelihood of their predictions being wrong. Indeed, for the run-of-the-mill high-profile expert on TV, the chances of their predictions being right were about the same as those of a chimpanzee throwing darts at a dartboard.
After never-ending intelligence disasters (and weary of repeating the mantra ‘we might have been wrong on that, but if it wasn’t for secrecy we could tell you about all our great successes’) IARPA – the intelligence community’s equivalent of DARPA – decided to conduct a tournament among selected groups to find out which was most effective at making predictions. The tournament was cut short because Tetlock’s group, the Good Judgment Project, was so far in front of the other contenders that the competition had become pointless.
Now Tetlock, with Dan Gardner, has written Superforecasting which explains how they won and how they found a number of superforecasters. These superforecasters weren’t experienced analysts, exceptionally intelligent or uniquely qualified – but they did think in different and characteristic ways.
First, they understood and applied the Kahneman distinction between System 1 and System 2 thinking – instinctive and deliberative. The first is good for avoiding sabre-toothed tigers in long grass, the second for navigating modern societies.
Second, the scientific method which revolutionised medicine and other fields (eg randomised trials) and substituted evidence for assumptions works in forecasting too. Allied with this is the need for some level of numeracy – not sophisticated mathematics, just basic numeracy.
Third, looking at problems from the outside rather than the inside – starting with the context and what it tells you. For instance, if you wanted to estimate the chances of a particular family having a pet, you don’t start by making assumptions about the family; you start with the percentage of households which own pets, which gives you the first indication of the probability of them having one, and then work through the other, case-specific elements.
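The outside-view habit can be sketched in a few lines of code. This is only an illustration of the idea – the base rate and the adjustment factors below are invented numbers, not data from the book:

```python
# A minimal sketch of the "outside view" heuristic: start from the
# base rate, then fold in case-specific (inside-view) evidence.
# All numbers here are hypothetical, for illustration only.

def outside_view_estimate(base_rate, adjustments):
    """Start from the base rate, then nudge it for case-specific evidence.

    `adjustments` is a list of multiplicative odds factors: >1 means the
    evidence makes the outcome more likely, <1 less likely.
    """
    odds = base_rate / (1 - base_rate)   # convert probability to odds
    for factor in adjustments:
        odds *= factor                   # fold in each piece of evidence
    return odds / (1 + odds)             # convert back to a probability

# Suppose ~60% of households own a pet (made-up base rate).
# This family lives in a small flat (odds factor 0.5) but has young
# children (odds factor 1.5) -- both factors are made up too.
p = outside_view_estimate(0.60, [0.5, 1.5])
print(round(p, 2))  # -> 0.53
```

The point of working in odds is simply that independent pieces of evidence can be multiplied in one at a time, which mirrors the "then you work through other elements" step.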
Fourth, having a philosophical approach characterised by caution, humility and nondeterministic thinking; a thinking style which is actively open-minded, intellectually curious, reflective and numerate; and a forecasting method which is pragmatic, analytical, probabilistic, able to value and absorb diverse views, uses thoughtful updates (when the facts change, change your mind) and good intuitive psychology. They also have a commitment to getting better and a bit of grit.
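The ‘thoughtful updates’ habit has a textbook form: Bayes’ rule. A toy illustration (not an example from the book – the probabilities are invented) of revising a forecast as a new fact arrives:

```python
# A toy Bayes'-rule update: revise a probability when new evidence arrives.
# All probabilities below are illustrative, not from the book.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) given the prior and the chance of
    seeing the evidence when the hypothesis is true vs false."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Start with a 30% forecast; a new fact is three times as likely if the
# forecast is right (0.6 vs 0.2 -- hypothetical numbers).
p = bayes_update(0.30, 0.60, 0.20)
print(round(p, 2))  # -> 0.56
```

Note the facts changed and the forecast moved – but only as far as the strength of the evidence warrants, which is the measured, probabilistic style the book describes.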
Superforecasting also contains good discussions of the wisdom of crowds, groupthink, ideological biases and similar things. There are some fascinating, amusing and infuriating case studies. These include a detailed analysis of where the CIA went wrong in failing to predict the fall of Iran’s Shah; a riveting comparison of the killing of Osama bin Laden as depicted in the film Zero Dark Thirty (mostly bullshit, as you would imagine) with what Leon Panetta actually experienced, without the benefit of the script James Gandolfini had to rely on when he played him in the film; the reliability of prediction markets; and, some pen portraits of the people who became superforecasters.
One of the best sections of the book is a leadership chapter – which provides a great template for people in the private and public sectors. It includes a comparison of the histories of the German and US armies up to World War II and what lessons people facing leadership dilemmas can learn. As Tetlock says, the German army may have been used for evil purposes, but it was a magnificent fighting organisation built on the staff foundations laid by Helmuth von Moltke in the 19th century. The point of the comparison is that leaders must make decisions which ultimately rely on forecasts. In the German army, responsibility for decisions was devolved as far as possible because ‘the leader’ could never be all-knowing. Needless to say, when a madman came along even the German army couldn’t cope, as it was pushed into mad decisions (invade Russia in winter, anyone?) and its superiority in the field was ultimately thwarted by superior forces and logistics. Part of that thwarting was because Eisenhower, who inherited an army in which people had very little discretion, understood that in modern warfare discretion was not just a product of command structures but also of logistics and the availability of supplies. Outcomes are more predictable when you have plenty of those.
Although, of course, one of the other things you need in military situations is accurate information and non-hubristic assessments of capabilities and success factors. That’s why the US Army learnt the Eisenhower lessons too well and then failed to take into account some other fundamentals of forecasting in places like Vietnam, Iraq and Afghanistan.
Lots of people make forecasts every day – in finance, in bridge, in betting, in poker – but knowledge of how to make better forecasts is rare. Tetlock and Gardner outline many examples of how you can get better; how you can become a superforecaster; and, how you can start to ask the right questions on which to base forecasts. The only problem is that if you follow them closely you will have a much higher probability of prediction success but a much lower probability of a slot on TV current affairs programs.
Indeed, there are running jokes throughout the book about Tom Friedman, that well-known maker of grand predictions and grand books about grand-scale societal, political and economic events. Tetlock concludes that, unlike those of some TV-friendly pundits, the correctness (well, actually incorrectness) of Tom’s predictions is a bit hard to evaluate. But, says Tetlock, Tom does ask good questions, a habit which is a better guide to outcomes than his conclusions.