Psychology

The Overconfidence Effect: Why Experts Are Wrong More Than They Think

The project timeline estimate delivered with certainty that proves wrong by 40%. The market forecast stated confidently that misses by a quarter. The negotiation position held firmly until it collapses. In each case, the problem is not the error itself (prediction under uncertainty is inherently imprecise) but the miscalibration between expressed confidence and actual accuracy.

Feb 19, 2026 · 6 min read
Quick Answer

What is the overconfidence effect?

  • The overconfidence effect is the systematic tendency for people's expressed confidence to exceed their actual accuracy. Fischhoff, Slovic, and Lichtenstein (1977) showed that when participants set 90% confidence intervals (ranges they were 90% sure contained the true answer), those intervals captured the true value only about 60% of the time. The effect has three forms: overprecision (too-narrow confidence intervals), overplacement (believing you're better than others), and overestimation (believing you'll perform better than you do).

Calibration and the 90% Confidence Interval Study

Baruch Fischhoff, Paul Slovic, and Sarah Lichtenstein published "Knowing with certainty: The appropriateness of extreme confidence" in the Journal of Experimental Psychology: Human Perception and Performance in 1977 (3(4), 552–564). The methodology used a confidence interval task: participants answered general knowledge questions by providing a range (a lower bound and an upper bound) they were 90% confident contained the true answer.

A well-calibrated person setting 90% confidence intervals should have their true answers fall inside those intervals 90% of the time. In the research, the actual hit rate was approximately 60%, meaning participants' intervals were far too narrow for their stated confidence level. The same participants who thought they were right 90% of the time were actually right only about 60% of the time when using their own stated confidence as the benchmark.
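The calibration measure used in this research reduces to a simple hit-rate calculation: count how often the true answer falls inside the stated interval. A minimal sketch with illustrative numbers (not the study's actual data):

```python
# Calibration check for 90% confidence intervals.
# Each tuple: (lower bound, upper bound, true answer).
# Illustrative data, not drawn from the 1977 study.
intervals = [
    (1900, 1950, 1912),   # hit: true value inside the range
    (100, 200, 250),      # miss: interval too narrow
    (5, 15, 10),          # hit
    (1000, 1200, 900),    # miss
    (30, 60, 45),         # hit
]

hits = sum(lo <= truth <= hi for lo, hi, truth in intervals)
hit_rate = hits / len(intervals)

# A well-calibrated setter of 90% intervals would score ~0.90;
# participants in the calibration studies scored ~0.60.
print(f"hit rate: {hit_rate:.0%}")
```

Tracking your own hit rate this way, across enough predictions, is exactly the kind of unambiguous feedback the calibration literature identifies as necessary for improvement.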

90% stated → ~60% actual

When participants set 90% confidence intervals (ranges they were 90% sure contained the true answer), those intervals captured the true value approximately 60% of the time. The gap between stated confidence and actual accuracy is the overconfidence effect: systematic, not random.

Source: Fischhoff, B., Slovic, P., & Lichtenstein, S. (1977). Knowing with certainty: The appropriateness of extreme confidence. Journal of Experimental Psychology: Human Perception and Performance, 3(4), 552–564.

The finding has been replicated in hundreds of studies across domains. Lichtenstein, Fischhoff, and Phillips (1982) reviewed the calibration literature and confirmed that overconfidence is one of the most robust findings in judgment and decision-making research, present in students and experts, across cultures, and across a wide range of domains.

Three Types of Overconfidence

Moore and Healy (2008, "The trouble with overconfidence," Psychological Review, 115(2), 502–517) systematized three distinct forms of overconfidence that previous researchers had conflated:

  • Overprecision. Expressing more certainty than accuracy warrants: the 90%-stated/60%-actual gap in the Fischhoff study. This is the most robust and consistent form, present even when overplacement and overestimation are not. Confidence intervals are systematically too narrow.
  • Overplacement. Believing you are better than others, the "Lake Wobegon effect" where most drivers rate themselves above average. This form of overconfidence is domain-dependent and can reverse: on difficult tasks, people tend to underplace (believe they are worse than average), because they experience their own struggle with the task directly while neglecting that others find it just as hard.
  • Overestimation. Believing you will perform better than you actually do. This form also depends on task difficulty and is not universal: on hard tasks, people may underestimate their performance. The planning fallacy (underestimating how long tasks will take) is a form of overestimation about task completion speed.

Try alfred_

See what this looks like in practice

alfred_ applies these principles automatically — triaging your inbox, drafting replies, extracting tasks, and delivering a Daily Brief every morning. Theory becomes system. $24.99/month. 30-day free trial.

Try alfred_ free

Professional Consequences

  • Forecasting and planning. Project timelines, market forecasts, sales projections, and strategic plans are all subject to overprecision. Schedules that appear to have no slack often have no room for the variance that reality produces because the underlying estimate ignored the systematic tendency to underestimate uncertainty. Reference class forecasting, which anchors estimates to the distribution of outcomes for similar historical projects, partially corrects overprecision by forcing exposure to the actual distribution rather than the inside view.
  • Expert overconfidence. A counterintuitive finding from the calibration research is that expertise does not reliably improve calibration and may sometimes worsen it. Experts develop more internally coherent narratives about their domain, which can increase expressed confidence without proportionally increasing accuracy. Tetlock's research on expert political forecasting found that domain experts' confidence in their predictions exceeded their accuracy by approximately the same amount as non-experts'. Expertise improves accuracy; it does not necessarily improve the gap between accuracy and expressed confidence.
  • Negotiation and commitment. Overconfident negotiators enter with positions that leave less room for compromise than the actual distribution of possible outcomes justifies. Overconfident project leads commit to deliverables with no variance buffer. The cost is asymmetric: overconfident commitments feel costless when they work and prove very costly when they fail. Since they fail more often than the overconfident forecaster expected, the average outcome is worse than a well-calibrated estimate would produce.

Frequently Asked Questions

Does overconfidence decrease with experience and feedback?

Training and feedback can improve calibration, but the improvement is domain-specific and requires systematic feedback about accuracy. Meteorologists and professional bridge players show better calibration than the general population, but only for their specific domains where they receive rapid, unambiguous feedback about the accuracy of their predictions. In domains with delayed, ambiguous, or absent feedback (which describes most executive and strategic decision-making) experience does not reliably improve calibration. The key variable is feedback quality: clear, timely, unambiguous information about prediction accuracy, received repeatedly over time, is required for calibration learning to occur.

Is there a situation where overconfidence is beneficial?

Some research suggests moderate overconfidence can be adaptive in specific contexts. Johnson and Fowler (2011) proposed that overconfidence could be evolutionarily adaptive in competitive situations where overclaiming resources or status has positive expected value. In negotiation contexts, moderate overconfidence may communicate credibility and raise first offers in ways that improve outcomes. Psychological research on positive illusions (Taylor & Brown, 1988) suggested that slightly inflated self-assessments are associated with mental health and resilience. The important qualifier is 'moderate': the benefits of slight overconfidence are domain-specific and quickly reverse when overconfidence leads to poor preparation, failed commitments, or competitive disadvantage from miscalibrated resource allocation.

What structural practices reduce overconfidence in organizational decisions?

Four evidence-based practices: (1) Pre-mortem analysis: asking 'assuming this project failed, why did it fail?' before commitment, which forces consideration of failure modes that overconfidence suppresses. (2) Reference class forecasting: anchoring estimates to the distribution of outcomes for comparable historical cases rather than relying on inside-view reasoning about the specific case. (3) Confidence interval widening: a direct calibration technique where decision-makers explicitly widen their initial intervals by a factor of 1.5 to 2 to partially correct for systematic narrowing. (4) Adversarial collaboration: having a designated critic explicitly construct the best case against the prevailing plan, which surfaces the scenario-space that overconfidence tends to exclude.

Try alfred_

Track what you said you'd do.

Overconfidence is sustained by poor feedback loops. alfred_ makes your commitments explicit and tracks follow-through, giving you the calibration data to see where your confidence and your completion rate actually line up. $24.99/month. 30-day free trial.

Try alfred_ free