Political “expert” predictions are only slightly better than chance, and they fall short of even simple statistical models.
Oddly enough, “experts” were slightly better at predicting events outside their area of expertise.
In the mid-1980s, the psychologist Philip Tetlock noticed exactly this pattern among political experts of the day. Determined to make them put their proverbial money where their mouths were, Tetlock designed a remarkable test that was to unfold over twenty years. To begin with, he convinced 284 political experts to make nearly a hundred predictions each about a variety of possible future events, ranging from the outcomes of specific elections to the likelihood that two nations would engage in armed conflict with each other. For each of these predictions, Tetlock insisted that the experts specify which of two outcomes they expected and also assign a probability to their prediction. He scored the predictions so that confident forecasts earned more points when correct but also lost more points when mistaken. With those predictions in hand, he then sat back and waited for the events themselves to play out. Twenty years later, he published his results, and what he found was striking: although the experts performed slightly better than random guessing, they did not perform as well as even a minimally sophisticated statistical model. Even more surprisingly, the experts did slightly better when operating outside their area of expertise than within it.
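The scoring scheme described above behaves like a quadratic (Brier-style) scoring rule: a forecaster who says "90 percent" gains more credit than one who says "60 percent" when the event happens, and loses more when it doesn't. The sketch below is an illustration of that general idea, not Tetlock's exact scoring formula, and the numbers are invented for the example.

```python
def brier_score(prob, outcome):
    """Squared error between a forecast probability and the 0/1 outcome.
    Lower is better: 0.0 is a perfect forecast, 1.0 is maximally wrong."""
    return (prob - outcome) ** 2

# A confident correct forecast scores better (lower) than a hedged one...
assert brier_score(0.9, 1) < brier_score(0.6, 1)   # 0.01 < 0.16
# ...but a confident wrong forecast scores worse than a hedged wrong one.
assert brier_score(0.9, 0) > brier_score(0.6, 0)   # 0.81 > 0.36
```

This asymmetry is what keeps forecasters honest: the rule rewards stating the probability you actually believe rather than hedging everything at 50 percent or bluffing with false confidence.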
Okay, that’s politics. What about technology? There, predictions turn out to be wrong about 80 percent of the time.
Around the same time that Tetlock was beginning his experiment, in fact, a management scientist named Steven Schnaars tried to quantify the accuracy of technology-trend predictions by combing through a large collection of books, magazines, and industry reports, and recording hundreds of predictions that had been made during the 1970s. He concluded that roughly 80 percent of all predictions were wrong, whether they were made by experts or not.
The real problem of prediction, in other words, is not that we are universally good or bad at it, but rather that we are bad at distinguishing predictions that we can make reliably from those that we can’t.