When it comes to making forecasts — whether it’s predicting the outcome of an election or determining whether a marriage will last — what good is intuition? Can our gut instincts guide us to correct outcomes, or are they too unreliable to be useful in a world ruled by data?
People can use intuition to make remarkably accurate predictions, social scientists have shown. In an experiment published earlier this year, for example, psychologists found that call-center employees speaking with registered voters a week before an election could foresee with surprising accuracy which ones would flake out on their plans to vote. “It’s surprising to me because it’s such a short exchange for callers to be able to make useful inferences about whether respondents are actually going to do what they say,” the lead researcher, Todd Rogers, told me when the study was published. He cited other studies where ordinary people showed extraordinary abilities to intuit others’ personality traits, sexual orientation and racial attitudes.
At the same time, unconscious judgments can be contaminated with biases. Psychologist Daniel Kahneman laid out many of the perils of gut instinct in his 2011 best-seller “Thinking, Fast and Slow.” Among them are anchoring (being overly influenced by the first information you receive), hindsight bias (wrongly believing past events were predictable or predetermined), and the availability heuristic (judging how likely something is by how easily examples come to mind, rather than by how often it actually occurs).
For insights into the value of intuition, I turned to individuals who have demonstrated an unusually keen ability to make judgments about the future. The University of Pennsylvania psychologist Philip Tetlock calls these people “superforecasters,” a concept that emerged from an experiment he co-created called the Good Judgment Project. What I learned from Tetlock’s research and from talking to experiment participants is that people need intuition to make all kinds of judgments, but not all intuition is equal. Some people are more talented at using it — and facets of intuition can be honed and improved.
Superforecasters earn their title by scoring in the top 2 percent of participants in a series of geopolitical-prediction tournaments organized by IARPA (the Intelligence Advanced Research Projects Activity), a government research agency that works to improve U.S. intelligence capabilities. In 2011, the tournament’s first year, contestants offered predictions about whether Serbia would be granted European Union candidacy by Dec. 31 of that year, and whether the London gold market fixing price would exceed $1,850 on Sept. 30.
These kinds of problems require subjective judgments, said Warren Hatch, a financial strategist and superforecaster. He uses data, but he has to make subjective, intuitive decisions about which data to consider and how much weight each data set deserves. It’s not something Hatch did well at first, he said. But when he decided to put more effort into updating his forecasts and learning from other forecasters, he reached the elite 2 percent.
A key technique in superforecasting is a system of reasoning called Bayesian analysis. A forecaster will choose a starting probability, called a prior, and then refine that figure — increasing or reducing it — by systematically taking additional information into account. If Hatch were trying to forecast the results of the next election, as a prior he might choose the frequency with which a candidate from one party wins following a two-term incumbent of the same party. Others might start with a figure from polling data. There’s no single right prior or path forward, but some prove better than others.
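The mechanics of a single Bayesian update can be sketched in a few lines of Python. The numbers below are invented for illustration, not drawn from any real forecast: a base-rate prior is revised after observing a piece of evidence that is more likely to appear if the event will actually happen.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Apply Bayes' rule for one piece of evidence.

    prior: starting probability that the event will happen.
    p_evidence_if_true / p_evidence_if_false: how likely this evidence
    is in worlds where the event does / does not happen.
    Returns the posterior probability.
    """
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical: a 0.30 base-rate prior that a challenger wins, then a
# favorable poll that is twice as likely if the challenger really will
# win (0.6 vs. 0.3). The forecast moves up, but nowhere near certainty.
posterior = bayes_update(0.30, 0.6, 0.3)
print(round(posterior, 3))  # 0.462
```

Each new poll, expert comment or economic report can be fed through the same update, which is what "systematically taking additional information into account" amounts to in practice.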
Qualifying as a superforecaster is not about getting things right or wrong. The judging system rewards those who tend to pick the best odds. If an event indeed happens within the specified time frame, contestants get more credit for predicting it with 90 percent odds than they would have with 60 percent — but they lose more credit with the 90 percent call if the event fails to happen. Even an invertebrate can have a lucky streak, but real skill should show up over months and many forecasts.
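The tournaments graded forecasts with the Brier score, a squared-error measure in which lower is better. A minimal sketch shows the asymmetry described above — bold calls score well when right and are punished hard when wrong:

```python
def brier_score(forecast, outcome):
    """Squared error between a probability forecast (0..1) and a 0/1
    outcome. Lower is better; 0.0 is a perfect call."""
    return (forecast - outcome) ** 2

# The event happens (outcome = 1): the confident 0.9 call beats 0.6.
print(round(brier_score(0.9, 1), 2), round(brier_score(0.6, 1), 2))  # 0.01 0.16
# The event fails (outcome = 0): the confident call is penalized far more.
print(round(brier_score(0.9, 0), 2), round(brier_score(0.6, 0), 2))  # 0.81 0.36
```

Averaged over many questions, this kind of scoring separates forecasters who are well calibrated from those who merely got lucky.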
Kahneman’s “Thinking, Fast and Slow” is a favorite book among superforecasters. Its message isn’t to ignore intuition, but to look out for mental traps. Consider one of the book’s first examples of thinking fast: If a bat and ball together cost $1.10, and the bat costs a dollar more than the ball, what’s the price of the ball? If 10 cents popped into your head, your intuition tripped you up.
But some people may hear another intuitive voice nagging that there’s something wrong. It couldn’t be that easy, and it isn’t. If the bat is a dollar and the ball 10 cents, then the bat costs 90 cents more than the ball — not a dollar. The correct answer is five cents.
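The algebra behind the puzzle is easy to check mechanically. Working in cents to sidestep floating-point rounding:

```python
# Let ball be the ball's price in cents.
# ball + bat = 110, and bat = ball + 100,
# so substituting: 2 * ball + 100 = 110.
ball = (110 - 100) // 2   # 5 cents
bat = ball + 100          # 105 cents
print(ball, bat, ball + bat, bat - ball)  # 5 105 110 100
```

The intuitive answer of 10 cents fails the second constraint: a $1.00 bat is only 90 cents more than a 10-cent ball.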
The good news is that for some people, intuition is trainable. One participant in the IARPA contest, insurance researcher and former math major Mary Pat Campbell, says mathematicians see intuition as the unconscious mastery of information. This applies to competitive forecasters, too. They need intuition to decide which statistics to consider, and how often to update information without being distracted by noise.
This intuitive component may be what keeps forecasting in the human realm. Judgment project leader Tetlock devotes several pages in his popular book on superforecasting to arguing why human prognosticators won’t be overtaken by machines any time soon. But Hatch, the superforecaster, sees the scientific study of human judgment as an up-and-coming field. “It’s in the air,” he said. Maybe intuition is like swimming or cooking or many other skills — practicing certain ways of thinking can change what becomes second nature.