
Confirmation Bias: Why We Find What We're Looking For

Peter Wason's 2-4-6 task (1960, QJEP) and selection task (1968, QJEP) demonstrated that people preferentially seek confirming rather than disconfirming evidence for their hypotheses, even when falsification would be more informative. This positive test strategy operates in market research, due diligence, and strategic planning.

Quick Answer

What is confirmation bias?

  • Confirmation bias is the tendency to seek, favor, and recall information that confirms pre-existing beliefs while underweighting disconfirming evidence
  • Wason's 2-4-6 task (1960) showed that people test hypotheses by generating confirming examples rather than attempting falsification, even when falsification would be more informative
  • The selection task (1968) showed the same positive test strategy in conditional reasoning: most people chose to confirm the rule rather than turn over the card that could actually disprove it
  • In organizations, this operates as due diligence by advocates, market research that validates rather than tests strategy, and performance reviews that seek confirming evidence for existing assessments

The 2-4-6 Task

Peter Wason published “On the Failure to Eliminate Hypotheses in a Conceptual Task” in the Quarterly Journal of Experimental Psychology in 1960 (Vol. 12, pp. 129–140). The task is deceptively simple.

Participants were told that the sequence “2-4-6” conformed to a rule, and their task was to discover the rule by generating additional three-number sequences. The experimenter would tell them whether each proposed sequence conformed to the rule. When they were confident they had discovered the rule, they announced it.

The actual rule was simply “any three ascending numbers.” But participants consistently formed hypotheses like “even numbers ascending by two” or “even numbers in arithmetic progression,” then tested those hypotheses by proposing sequences that would fit their hypothesis: “6-8-10,” “14-16-18,” “100-102-104.” Each proposal confirmed the hypothesis, and participants grew increasingly confident in incorrect rules.

What almost no one did spontaneously was test a sequence that would falsify the hypothesis, such as "3-5-7" (which would have shown that odd numbers also fit the rule) or "1-5-20" (which would have shown that non-equal-interval sequences also fit). Falsifying tests are more informative than confirming tests, but people default to generating examples consistent with their current hypothesis. Klayman and Ha (1987) later named this the "positive test strategy": testing by seeking confirmation rather than refutation.
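The asymmetry between confirming and falsifying tests can be made concrete with a small simulation. This is an illustrative sketch, not code from the original study: `hidden_rule` encodes Wason's actual rule, `participant_hypothesis` encodes a typical incorrect guess, and the test sequences are the ones quoted above.

```python
def hidden_rule(seq):
    """Wason's actual rule: any three ascending numbers."""
    a, b, c = seq
    return a < b < c

def participant_hypothesis(seq):
    """A typical (wrong) hypothesis: even numbers ascending by two."""
    a, b, c = seq
    return a % 2 == 0 and b == a + 2 and c == b + 2

# Positive tests: sequences chosen to FIT the hypothesis. Both rules agree,
# so every trial "confirms" the incorrect hypothesis.
positive_tests = [(6, 8, 10), (14, 16, 18), (100, 102, 104)]
assert all(hidden_rule(s) and participant_hypothesis(s) for s in positive_tests)

# Falsifying tests: sequences chosen to VIOLATE the hypothesis. Here the two
# rules disagree -- the informative result participants rarely sought.
falsifying_tests = [(3, 5, 7), (1, 5, 20)]
for s in falsifying_tests:
    print(s, "conforms:", hidden_rule(s),
          "| hypothesis predicts:", participant_hypothesis(s))
```

Every positive test returns the same answer under both rules, so no number of them can separate the hypothesis from the truth; a single falsifying test does.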

The Selection Task

Wason introduced the selection task in a 1966 paper and gave it its canonical treatment in “Reasoning About a Rule” in the Quarterly Journal of Experimental Psychology in 1968 (Vol. 20, No. 3, pp. 273–281). The task uses a conditional rule, “If there is a vowel on one side of the card, then there is an even number on the other side,” and four cards showing A, K, 4, and 7.

To test whether the rule holds, which cards should you turn over? The correct answer is A (a vowel card, where you need to check if even is on the back) and 7 (an odd number card, where you need to check if a vowel is on the back, which would violate the rule). Most participants correctly choose A but also choose 4, the even number card, which cannot violate the rule regardless of what’s on the other side and provides no information about whether the rule holds.

The 4-card selection is the confirming choice: if the rule is “vowels go with even numbers,” then the even-number card feels like it should be checked. But checking it produces only a confirming or null result: it can tell you that the rule holds in one more case, but not that the rule is false. The 7 card, the disconfirming choice, is the one with actual logical power, and it is the card most frequently left unselected.
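The logic of the task can be sketched in a few lines. This is an illustrative sketch of the conditional "if vowel, then even" using the four card faces from the example above: only a vowel card or an odd-number card can possibly falsify the rule.

```python
def can_falsify(visible_face):
    """Return True if turning this card over could disprove
    the rule 'if vowel on one side, then even on the other'."""
    if visible_face.isalpha():
        # A vowel card can falsify (an odd number may be on the back);
        # a consonant card cannot, since the rule says nothing about it.
        return visible_face.lower() in "aeiou"
    # An odd-number card can falsify (a vowel may be on the back);
    # an even-number card cannot, whatever is on the other side.
    return int(visible_face) % 2 == 1

for face in ["A", "K", "4", "7"]:
    print(face, can_falsify(face))
```

Running this marks only A and 7 as informative; the 4 card, the one most participants reach for, has no power to disprove anything.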

Professional Manifestations

The same positive test strategy operates in professional work wherever hypothesis testing is informal. Due diligence conducted by a deal's advocates tends to surface evidence that the acquisition is sound rather than evidence that it is not. Market research commissioned after a strategy has been chosen validates that strategy instead of testing it against alternatives. Performance reviews anchored to an existing assessment seek incidents that confirm the rating rather than incidents that would revise it. In each case, the professional is running the 2-4-6 task: generating confirming examples of a favored hypothesis and accumulating confidence without ever attempting falsification.


Frequently Asked Questions

Does expertise reduce confirmation bias?

The evidence on expertise and confirmation bias is discouraging: in most domains studied, experts show confirmation bias at levels comparable to novices. The primary exception is domains where professionals have been trained in systematic falsification: experimental scientists, formal logicians, and some clinical diagnosticians trained in differential diagnosis show somewhat reduced confirmation bias in their trained domains. But expertise in one domain does not transfer: a scientist with strong falsification habits in their research field may still show robust confirmation bias in financial or strategic decisions outside their trained domain. The bias appears to be a default feature of hypothesis testing that training in specific contexts can partially override.

How does confirmation bias interact with group decision-making?

Groups amplify confirmation bias through two mechanisms. First, social pressure toward consensus causes disconfirming information to be downweighted or suppressed: the member who raises disconfirming evidence faces groupthink dynamics that penalize dissent. Second, when the group shares a hypothesis (as cohesive groups often do), its collective information search skews even more comprehensively toward confirmation, because each member's individual positive test bias reinforces the same directional skew. The correctives in group settings are the same as for individuals: structurally required consideration of disconfirming hypotheses, devil's advocate roles, or pre-mortems. But social dynamics make them harder to implement than in individual decision processes.

What structural practices most reliably reduce confirmation bias in strategic decisions?

Four practices with evidence behind them: (1) Pre-mortem analysis: ask 'imagine this decision has been implemented and has failed; what went wrong?' to force generation of falsifying scenarios before the decision is made. (2) Adversarial collaboration: assign a person or team the explicit role of building the strongest possible case against the proposed decision, rather than relying on someone to volunteer dissent. (3) Red team/blue team structures: create two teams that independently analyze the same question with opposite starting positions. (4) Decision journaling: record the assumptions underlying a decision before outcomes are known, which prevents hindsight bias from retroactively confirming that the decision was obviously right or obviously wrong.