Belief Perseverance: Why Your Brain Defends Positions Instead of Updating Them

There are two responses to information that challenges your view: explain it away, or let it change what you think. The first feels powerful. The second is the only one that makes you smarter.

8 min read · for the tool Update, Don't Defend

A colleague sends you an article that directly contradicts a strategy you’ve been advocating for the past quarter. The data is solid. The source is credible. The methodology is sound. You read it, and within thirty seconds, you’ve found the flaw — a difference in market conditions, a questionable sample size, a conclusion that’s “technically correct but misses the nuance.” You share your rebuttal with the colleague. You feel sharp. You feel rigorous. You have not updated your view by a single percentage point.

This is a sophisticated defensive habit performing exactly as designed. When information threatens an existing belief, the cognitive system doesn’t evaluate the information objectively. It evaluates it as a threat — and deploys the full analytical machinery not to understand it, but to neutralise it.

The research

Charles Lord, Lee Ross, and Mark Lepper published one of the most important studies in the psychology of belief in the Journal of Personality and Social Psychology in 1979. They presented participants who held strong views on capital punishment with two studies — one supporting the death penalty’s deterrent effect, one refuting it. After reading both studies, participants didn’t moderate their views toward a balanced position. They polarised further. Each side judged the confirming study as well-designed and the disconfirming study as methodologically flawed. Identical research designs were evaluated as rigorous or sloppy depending entirely on whether the conclusion matched the reader’s prior belief.

Craig Anderson, Mark Lepper, and Lee Ross followed up in 1980 with a study on belief perseverance, published in the same journal. They gave participants fabricated evidence supporting a specific theory, then told them the evidence was completely fictitious — debriefed them fully. Even after the debriefing, participants continued to hold the belief the fabricated evidence had supported. Once a causal explanation had been constructed — even from false data — the explanation persisted independently of the data. The belief had been detached from its evidential foundation and was now self-sustaining.

Dan Kahan and colleagues at Yale published a striking finding in Nature Climate Change in 2012. They found that higher scientific literacy and numeracy did not reduce partisan polarisation on climate change — they increased it. More scientifically literate individuals were better at interpreting data in ways that supported their pre-existing political identity. The analytical tools that should have enabled more accurate assessment were instead deployed in service of more sophisticated defence. Intelligence and education don’t solve the updating problem. They can make it worse.

The mechanism

Julia Galef, in The Scout Mindset (2021), articulated the distinction between two cognitive orientations: the soldier mindset, which treats beliefs as positions to defend, and the scout mindset, which treats beliefs as maps to be updated. The soldier responds to contradictory evidence by asking “how can I defeat this?” The scout responds by asking “what can I learn from this?” The soldier feels certainty. The scout feels curiosity. Both responses are automatic and feel equally natural — the difference is trained, not innate.

Thomas Bayes, in his posthumously published 1763 essay in the Philosophical Transactions of the Royal Society, laid the mathematical foundation for rational belief updating. Bayesian reasoning prescribes a specific process: start with a prior probability (your current confidence), observe new evidence, calculate the likelihood of that evidence under competing hypotheses, and produce a posterior probability (your updated confidence). The mathematics are precise. The human psychology that’s supposed to implement them is not.
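
To see what the prescription demands, here is a minimal sketch of a single Bayesian update in Python. Everything in it is illustrative rather than drawn from the sources above: the function name, the 85% prior, and the assumption that the colleague’s article is three times as likely to exist if your strategy is flawed as if it is sound.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E."""
    # P(E) by the law of total probability
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
    return p_e_given_h * prior / p_e

# You are 85% confident the strategy works; the evidence is three times
# as likely under "the strategy is flawed" as under "it works".
posterior = bayes_update(prior=0.85, p_e_given_h=0.2, p_e_given_not_h=0.6)
print(f"{posterior:.2f}")  # 0.65: a real, proportional shift downward
```

A twenty-point drop, not a rebuttal: that is what treating unwelcome evidence symmetrically looks like in numbers.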

The gap between Bayesian updating and actual human reasoning is the space where belief perseverance lives. Bayesian updating requires treating evidence symmetrically — evidence that supports your view and evidence that challenges it should both shift your confidence proportionally. Human reasoning is asymmetric: confirming evidence is processed fluently and shifts confidence upward, while disconfirming evidence is scrutinised, criticised, and either neutralised or ignored. The net effect is that confidence only moves in one direction — toward wherever you started.
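
A toy simulation makes the asymmetry concrete. The model below is an assumption built for illustration, not one taken from the literature: evidence for and against a hypothesis arrives in strict alternation, so an honest updater should finish exactly where they started, while the defender raises every disconfirming likelihood ratio to a fractional power, blunting it toward neutrality before updating.

```python
import math

def simulate(discount, prior=0.85, steps=200):
    """Alternating for/against evidence; discount < 1 blunts the bad news."""
    log_odds = math.log(prior / (1 - prior))
    for step in range(steps):
        lr = 2.0 if step % 2 == 0 else 0.5   # perfectly mixed evidence
        if lr < 1.0:
            lr **= discount                   # the defence: explain it away
        log_odds += math.log(lr)              # Bayes' rule in log-odds form
    return 1 / (1 + math.exp(-log_odds))

print(f"symmetric updater:  {simulate(discount=1.0):.2f}")  # 0.85, unchanged
print(f"asymmetric updater: {simulate(discount=0.2):.2f}")  # 1.00, certainty
```

Fed perfectly balanced evidence, the symmetric updater ends where it began; the asymmetric one manufactures near-certainty out of noise.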

The identity dimension makes this worse. When a belief is connected to your sense of who you are — your professional expertise, your political identity, your values — updating the belief doesn’t just change what you think. It changes who you are. Kahan’s finding makes sense in this light: for scientifically literate partisans, the belief about climate change isn’t just a factual assessment. It’s a marker of tribal identity. Updating the belief would mean updating the identity — a cost that no amount of evidence can justify through the lens of identity-preservation.

The question “what would I need to see to change my mind completely?” is the most honest test of whether you’re holding a belief or the belief is holding you. If the answer is “nothing,” what you have isn’t a conclusion. It’s an identity.

The practical implications

Make the update explicit before responding. When contradictory evidence arrives, the instinct is to immediately engage with it — find the flaw, construct the rebuttal, explain why it doesn’t apply. The intervention is to pause and write, before any response: “How should this change my view? Even slightly.” Forcing the update into words disrupts the automatic defence response. Even a small update — “this shifts me from 85% to 80%” — keeps the door open for evidence to accumulate.

Track the direction of your updates over time. If every piece of new information shifts your confidence in the same direction — always toward your original position — that’s diagnostic. Genuine updating responds to the evidence, which in any uncertain domain will push in both directions over time. One-directional updating is evidence that the defence mechanism is winning, and the updating is cosmetic rather than real.
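
Tracking is easier to sustain if it is written down. The sketch below is a hypothetical helper, not a validated instrument; the class name, its methods, and the threshold of three same-direction shifts are all illustrative choices.

```python
from dataclasses import dataclass, field

@dataclass
class BeliefLog:
    """Record confidence updates on one claim and flag one-way drift."""
    claim: str
    history: list = field(default_factory=list)

    def record(self, confidence):
        self.history.append(confidence)

    def one_directional(self):
        shifts = [b - a for a, b in zip(self.history, self.history[1:]) if b != a]
        # every nonzero shift points the same way: the defence may be winning
        return len(shifts) >= 3 and (all(s > 0 for s in shifts) or
                                     all(s < 0 for s in shifts))

log = BeliefLog("Our pricing strategy is working")
for c in (0.85, 0.87, 0.90, 0.93):   # four readings, every update upward
    log.record(c)
print(log.one_directional())          # True: worth a hard look
```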

Use the “change my mind completely” test to identify identity-level beliefs. For most professional judgements, you should be able to articulate the specific evidence that would change your mind: “If customer retention drops below 70% after the change, I’d conclude the strategy was wrong.” If you genuinely cannot articulate what would change your mind, the belief has migrated from the domain of evidence to the domain of identity — and identity-level beliefs are immune to evidence by design. Recognising this doesn’t automatically fix it, but it tells you that the analytical tools you’re applying to defend the belief aren’t serving the function you think they are.

The bigger picture

The ability to update beliefs in response to evidence is, in theory, the distinguishing feature of rational decision-making. In practice, it’s the exception rather than the rule. The default human response to contradictory information is not to update but to defend — and the defence is so sophisticated, so fluent, and so satisfying that it’s almost indistinguishable from genuine analysis.

The organisations that learn fastest are not those with the smartest people. They’re those with cultures that make updating safe — where changing your mind is treated as evidence of rigour rather than weakness, where leaders model public updating, and where the social cost of admitting you were wrong is lower than the organisational cost of persisting in error.

At the individual level, the practice is simpler but no less difficult: when new information arrives, notice the urge to defend, and redirect it. Not suppress it — redirect it. The same analytical energy that would have gone into finding the flaw in the new evidence can instead be directed at finding what the new evidence reveals about the limits of your current view. The question changes from “how is this wrong?” to “how should this change what I think?” Same cognitive effort. Fundamentally different outcome.

References

  1. Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109.
  2. Anderson, C. A., Lepper, M. R., & Ross, L. (1980). Perseverance of social theories: The role of explanation in the persistence of discredited information. Journal of Personality and Social Psychology, 39(6), 1037–1049.
  3. Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2(10), 732–735.
  4. Galef, J. (2021). The Scout Mindset: Why Some People See Things Clearly and Others Don’t. Portfolio.
  5. Bayes, T. (1763). An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London, 53, 370–418.