Psychology

Confirmation Bias: Why We Find What We're Looking For

Market research designed to validate a strategy finds reasons to move forward. Due diligence conducted by the team that recommended the acquisition finds the evidence supporting it. Strategic reviews ask whether the current plan is on track rather than whether the current plan is the right plan. In each case, the process of seeking information is shaped by a hypothesis that the information is expected to confirm, and this systematically distorts what evidence is sought, found, and weighted.

Feb 19, 2026 · 6 min read
Quick Answer

What is confirmation bias?

  • Confirmation bias is the tendency to seek, favor, and recall information that confirms pre-existing beliefs while underweighting disconfirming evidence
  • Wason's 2-4-6 task (1960) showed that people test hypotheses by generating confirming examples rather than attempting falsification, even when falsification would be more informative
  • The selection task (1968) showed the same positive test strategy in conditional reasoning: most people chose to confirm the rule rather than turn over the card that could actually disprove it
  • In organizations, this operates as due diligence by advocates, market research that validates rather than tests strategy, and performance reviews that seek confirming evidence for existing assessments

The corrective is structural, not motivational: pre-mortems, adversarial collaboration, and red-team analysis work because they make disconfirmation procedurally required, not just intellectually encouraged.

The 2-4-6 Task

Peter Wason published "On the Failure to Eliminate Hypotheses in a Conceptual Task" in the Quarterly Journal of Experimental Psychology in 1960 (Vol. 12, pp. 129–140). The task is deceptively simple.

Participants were told that the sequence "2-4-6" conformed to a rule, and their task was to discover the rule by generating additional three-number sequences. The experimenter would tell them whether each proposed sequence conformed to the rule. When they were confident they had discovered the rule, they announced it.

The actual rule was simply "any three ascending numbers." But participants consistently formed hypotheses like "even numbers ascending by two" or "even numbers in arithmetic progression," then tested those hypotheses by proposing sequences that would fit their hypothesis: "6-8-10," "14-16-18," "100-102-104." Each proposal confirmed the hypothesis, and participants grew increasingly confident in incorrect rules.

What almost no one did spontaneously was test a sequence that would falsify the hypothesis: for example, "3-5-7" (which would have shown that odd numbers also fit the rule), or "1-5-20" (which would have shown that non-equal-interval sequences also fit). Falsifying tests are more informative than confirming tests, but people default to generating examples consistent with their current hypothesis. Later researchers dubbed this the "positive test strategy": testing by seeking confirmation rather than refutation.
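The logic of the task can be made concrete with a small sketch. This is a hypothetical simulation (the rule and hypothesis functions are illustrative, not from Wason's paper): every positive test yields "yes" under both the participant's narrow hypothesis and the experimenter's broader rule, so the feedback cannot distinguish them, while a single falsifying test settles the question.

```python
# Hypothetical simulation of the 2-4-6 task.

# The experimenter's actual rule: any three ascending numbers.
def actual_rule(seq):
    a, b, c = seq
    return a < b < c

# A participant's narrower hypothesis: even numbers ascending by two.
def hypothesis(seq):
    a, b, c = seq
    return all(n % 2 == 0 for n in seq) and b - a == 2 and c - b == 2

# Positive tests: sequences generated to FIT the hypothesis.
for seq in [(6, 8, 10), (14, 16, 18), (100, 102, 104)]:
    # Both rules answer "yes", so the feedback carries no information
    # about which of the two is correct.
    assert hypothesis(seq) and actual_rule(seq)

# A falsifying test: violates the hypothesis but may still fit the rule.
negative_test = (3, 5, 7)
assert not hypothesis(negative_test)
assert actual_rule(negative_test)
# The experimenter's "yes" on 3-5-7 refutes the hypothesis in one trial.
```

The asymmetry is the whole point: "yes" answers to positive tests are consistent with infinitely many rules, while one "yes" to a hypothesis-violating sequence eliminates the hypothesis outright.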

The Selection Task

Wason introduced the selection task in a 1966 paper and gave it its canonical treatment in "Reasoning About a Rule" in the Quarterly Journal of Experimental Psychology in 1968 (Vol. 20, No. 3, pp. 273–281). The task uses a conditional rule, "If there is a vowel on one side of the card, then there is an even number on the other side," and four cards showing A, K, 4, and 7.

To test whether the rule holds, which cards should you turn over? The correct answer is A (a vowel card, where you need to check if even is on the back) and 7 (an odd number card, where you need to check if a vowel is on the back, which would violate the rule). Most participants correctly choose A but also choose 4, the even number card, which cannot violate the rule regardless of what's on the other side and provides no information about whether the rule holds.

The 4-card selection is the confirming choice: if the rule is "vowels go with even numbers," then the even-number card feels like it should be checked. But checking it produces only a confirming or null result: it can tell you that the rule holds in one more case, but not that the rule is false. The 7 card, the disconfirming choice, is the one with actual logical power, and it is the card most frequently left unselected.
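The same logic can be checked mechanically. In this illustrative sketch (the card encoding is an assumption of the example, not anything from Wason's paper), a card is worth turning over only if some possible hidden face would violate the rule:

```python
# Hypothetical model of the selection task. Letter cards hide a number;
# number cards hide a letter. The rule: if a card shows a vowel on one
# side, it has an even number on the other.
VOWELS = set("AEIOU")

def violates(visible, hidden):
    faces = (visible, hidden)
    letters = [f for f in faces if isinstance(f, str)]
    numbers = [f for f in faces if isinstance(f, int)]
    # The rule is broken only by a vowel paired with an odd number.
    return any(l in VOWELS for l in letters) and any(n % 2 == 1 for n in numbers)

def can_falsify(visible, possible_hiddens):
    # Turning a card is informative only if some hidden face breaks the rule.
    return any(violates(visible, h) for h in possible_hiddens)

letters, numbers = ["A", "K"], [4, 7]
for card in letters + numbers:
    hiddens = numbers if isinstance(card, str) else letters
    print(card, can_falsify(card, hiddens))
# Only A and 7 can yield a violating combination; K and 4 cannot,
# so turning 4 wastes a test regardless of what is on its back.
```

Running this confirms the logical analysis: the popular 4 card has no hidden face that could break the rule, while the neglected 7 card does.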

Professional Manifestations

  • Due diligence. Acquisition due diligence conducted by the team that proposed the acquisition is structurally biased: they have formed a hypothesis (this is a good acquisition) and are now gathering evidence. Without explicit processes that require them to generate and pursue falsifying hypotheses (reasons the acquisition will fail, evidence that the synergy assumptions are wrong), the process will disproportionately surface confirming evidence.
  • Market research and strategy validation. Research questions that ask "what do customers value about our product?" are confirmation-seeking. Research questions that ask "what would make customers switch away from our product?" are falsification-seeking. The latter produces more strategically useful information, but organizations disproportionately commission the former, especially after a strategy has been committed to.
  • Performance review and hiring. Interview questions that probe the hypothesis "this candidate is strong in X" confirm the hypothesis if the candidate answers well. Questions that probe alternative hypotheses, like "what would I see if this candidate struggles under pressure?", generate evidence that can genuinely update the assessment. Structured interviews with adversarial questions alongside positive ones reduce but don't eliminate the positive test bias.
Frequently Asked Questions

Does expertise reduce confirmation bias?

The evidence on expertise and confirmation bias is discouraging: in most domains studied, experts show confirmation bias at levels comparable to novices. The primary exception is domains where professionals have been trained in systematic falsification: experimental scientists, formal logicians, and some clinical diagnosticians trained in differential diagnosis show somewhat reduced confirmation bias in their trained domains. But expertise in one domain does not transfer: a scientist with strong falsification habits in their research field may still show robust confirmation bias in financial or strategic decisions outside their trained domain. The bias appears to be a default feature of hypothesis testing that training in specific contexts can partially override.

How does confirmation bias interact with group decision-making?

Groups amplify confirmation bias through two mechanisms. First, social pressure toward consensus causes disconfirming information to be downweighted or suppressed: the member who raises disconfirming evidence faces group dynamics that penalize dissent. Second, if the group shares a hypothesis (as cohesive groups often do), its collective information search is even more comprehensively focused on confirmation, because each member's individual positive test bias reinforces the same directional skew. The corrective in group settings is the same as for individuals: structurally required consideration of disconfirming hypotheses, devil's advocate roles, or pre-mortems. But the social dynamics make these harder to implement than in individual decision processes.

What structural practices most reliably reduce confirmation bias in strategic decisions?

Four practices with evidence behind them: (1) Pre-mortem analysis: ask 'imagine this decision has been implemented and has failed; what went wrong?' to force generation of falsifying scenarios before the decision is made. (2) Adversarial collaboration: assign a person or team the explicit role of building the strongest possible case against the proposed decision, rather than relying on someone to volunteer dissent. (3) Red team/blue team structures: create two teams that independently analyze the same question with opposite starting positions. (4) Decision journaling: record the assumptions underlying a decision before outcomes are known, which prevents hindsight bias from retroactively confirming that the decision was obviously right or obviously wrong.

Try alfred_

Surface what you're not looking at.

alfred_ surfaces emails and threads you haven't engaged with, not just the ones relevant to your current priorities. The unanswered message from the skeptical stakeholder and the customer complaint that doesn't fit the positive narrative are exactly what confirmation bias would leave in the inbox. $24.99/month.

Try alfred_ Free