Non-experts typically better at forecasting the future than experts
There is a bitter truth for economists, as for professionals in other fields: non-experts are typically better at forecasting future trends than experts.
It has been argued that economists should be historians rather than meteorologists, and many years before the recent economic crash John Kenneth Galbraith, the late Harvard economist, joked: "The only function of economic forecasting is to make astrology look respectable."
Justin Wolfers, an economist, wrote last Friday in The New York Times on the May US jobs report, which showed net job creation falling to 38,000 for the month:
It’s possible that the economy is slowing significantly — that Friday’s jobs report is the canary in the coal mine. Perhaps employment is slowing because of election-related anxiety, or Fed-induced fears of higher interest rates, or concerns about the world economy. Maybe the recovery has run its course.
However, forecasting is hard to avoid, even though most of it is bullshit.
Last week The Washington Post reported that during an 8-day span last April, 602 pundits appeared on the Fox News, MSNBC and CNN cable networks in the US.
Pundits are seldom held accountable for their predictions, but the clever ones know how to make bold statements with enough caveats to avoid ever being proved wrong.
Philip Tetlock of the University of Pennsylvania and the Wharton School is known for his research on forecasting. In his 2005 book 'Expert Political Judgment: How Good Is It? How Can We Know?' he reported on a study in which he had tracked 284 people who made their living “commenting or offering advice on political and economic trends,” and who made 82,361 forecasts over a 20-year period.
The study showed that the average expert was only slightly more accurate than a dart-throwing chimpanzee; many experts would have done better had they made random guesses.

Prof Tetlock wrote in the Economist's 'The World in 2014' on who is good at forecasting. The Greek poet Archilochus had observed about 2,700 years ago: “The fox knows many things, but the hedgehog knows one big thing.” Isaiah Berlin, in his famous 1953 essay “The Hedgehog and the Fox,” contrasted hedgehogs that “relate everything to a single, central vision” with foxes who “pursue many ends connected...if at all, only in some de facto way.” It was an illustration of specialists and generalists at work.
In his most recent book, 'Superforecasting: The Art and Science of Prediction,' published in 2015 and written with the Canadian journalist Dan Gardner, Prof Tetlock and his team focused on how forecasters can improve their accuracy. Tetlock secured the cooperation of the Intelligence Advanced Research Projects Activity (IARPA), a US intelligence research agency, between 2011 and 2015.
Tetlock's team comprised more than 2,000 curious, non-expert volunteers operating under the banner of the Good Judgment Project. Working in teams and individually, the volunteers were asked to forecast the likelihood of events of the sort intelligence analysts try to predict every day ("Will Saudi Arabia agree to OPEC production cuts by the end of 2014?" was one example).
The results, from over 20,000 forecasts, were surprising. The Good Judgment Project's predictions were 60% more accurate in the first year than those of a control group of intelligence analysts, and results in the second year were even better, with an almost 80% improvement over the control group.
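Accuracy in such tournaments is measured with Brier scores, the metric Tetlock's project used: the squared gap between a probability forecast and what actually happened, averaged over questions. Below is a minimal Python sketch of the idea; the function and the sample numbers are illustrative only, not the project's actual code or data.

```python
def brier_score(forecasts, outcomes):
    """One common binary form of the Brier score: mean squared error
    between probability forecasts and yes/no outcomes (1 = happened).
    0.0 is perfect; 0.25 is what always guessing 50% earns here."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Illustrative numbers only: a confident, well-calibrated forecaster
# beats a fence-sitter on the same three yes/no questions.
outcomes = [1, 0, 1]          # what actually happened
sharp = [0.9, 0.1, 0.8]       # confident and mostly right
hedger = [0.5, 0.5, 0.5]      # always sits on the fence

print(brier_score(sharp, outcomes))    # 0.02
print(brier_score(hedger, outcomes))   # 0.25
```

Lower is better, which is why bold forecasters who are also right dominate cautious ones: hedging caps your losses but also caps your score.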
Two percent of the participants stood out from the group as spectacularly more prescient. They included Bill Flack, a retired US Department of Agriculture employee from Nebraska.
The book describes the cognitive traps that most people fall victim to all the time. A common forecasting mistake, one superforecasters avoid, is substituting an easy question for the hard one actually asked: asked whether a regime will fall within a year, most of us instead answer the easier question of whether we dislike the regime.
Prof Tetlock wrote that the superforecasters are naturally aware of the tricks the mind can play to make itself “certain” of something, as the human brain is wont to do. They are also ready to change a forecast when new evidence emerges, without fearing the damage to their egos. To be sure, the superforecasters were found to be generally more knowledgeable than the general population, and they have higher-than-average IQs. Still, they were neither much more knowledgeable nor much more intelligent than forecasters not of the “super” variety. Instead, they were simply more apt to keep testing their ideas, remaining in what Tetlock calls “perpetual beta” mode. They are careful without being spineless or indecisive, traits that count as sins for any self-respecting leader in today's culture.
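That habit of incremental updating is broadly Bayesian, and the book discusses Bayes' rule as the underlying logic. As a hedged illustration (the question, prior and likelihoods below are invented for the example, not taken from the book), here is how a single piece of evidence nudges a probability rather than lurching it to certainty:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(event | evidence) via Bayes' rule for a yes/no question."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Invented example: start at 30% that a deal is signed this year,
# then a credible leak appears that is 3x likelier if the deal is real.
p = 0.30
p = bayes_update(p, p_evidence_if_true=0.6, p_evidence_if_false=0.2)
print(round(p, 2))  # 0.56 -- a measured nudge, not a leap to "certain"
```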
“The humility required for good judgment is not self-doubt — the sense that you are untalented, unintelligent or unworthy,” Tetlock writes in the book. “It is intellectual humility. It is a recognition that reality is profoundly complex, that seeing things clearly is a constant struggle, when it can be done at all, and that human judgment must therefore be riddled with mistakes.”
The superforecasters performed about 30% better than intelligence community analysts with access to classified information. The very best of the superforecasters were even able to beat prediction markets, where traders bet real money on futures contracts based on the same questions, by as much as 40%.
Prof Tetlock rebuked Paul Krugman of the New York Times for his May 2016 blog post 'Send in the Clowns' on two Trump tax advisers.
Implicit Krugman claim here: I’m way better forecaster than these bozos. Maybe but self-scoring is suspect evidence https://t.co/DHe21kKeYx
— Philip E. Tetlock (@PTetlock) May 13, 2016
Pic on top: The federal government's National Oceanic and Atmospheric Administration (NOAA) made a monkey out of Dr. James Hansimian, the National Center for Public Policy Research's trained hurricane-forecasting chimp, by producing a more accurate hurricane forecast in 2010. See video
Videos:
Nate Silver on the Art and Science of Prediction