Outcome Bias: Why Judging Decisions by Results Teaches You to Be Lucky, Not Skilled
The result tells you what happened. It doesn't tell you whether the decision was good. Confusing the two is the most common way people learn the wrong lessons from their own experience.
You hired someone against the conventional wisdom — junior, no industry experience, rough around the edges. Everyone questioned the decision. Eighteen months later, the hire is your top performer. You feel vindicated. You tell the story at leadership events. “Trust your gut,” you say. “The credentials don’t matter as much as you think.”
Now consider: was the decision actually good? Or did the outcome happen to be good, and you’re retroactively crediting the process that led to it? If the same hire had floundered, with the same decision, the same reasoning, and the same information at the time, you’d be telling a very different story: a cautionary tale about credentials and due diligence. The decision was identical. Only the outcome changed. And the outcome is the one thing the decision-maker didn’t control.
The research
Jonathan Baron and John Hershey formalised this phenomenon in a 1988 paper in the Journal of Personality and Social Psychology. They called it outcome bias — the tendency to evaluate the quality of a decision based on the outcome it produced rather than the quality of the reasoning at the time. In their experiments, participants were given identical decision scenarios and asked to evaluate the decision-maker’s competence. The only variable was the outcome: good or bad. Decisions that produced good outcomes were rated as significantly more competent, more thoughtful, and more justified than identical decisions that produced bad outcomes.
The effect was reliable even when participants were explicitly told that the outcome was due to factors outside the decision-maker’s control. Knowing that luck was involved didn’t eliminate the tendency to credit or blame the person who made the call. The outcome overwrote the process in the evaluator’s mind.
Annie Duke explored the practical consequences of this bias in Thinking in Bets (2018), drawing on her experience as a professional poker player. In poker, the distinction between decision quality and outcome quality is inescapable: you can play a hand perfectly and lose to a bad beat, or play terribly and win by luck. The game forces players to separate process from result, because the feedback cycle is fast enough to reveal the difference. In professional and personal life, the feedback cycle is slower, the variables are more complex, and the temptation to conflate “it worked” with “it was a good decision” is far stronger.
Duke popularised the poker term “resulting” for this conflation: the automatic inference from outcome to decision quality. Resulting feels like learning. It feels like you’re extracting signal from experience. But when the signal is contaminated by randomness, what you’re actually learning is which patterns of behaviour happened to coincide with good outcomes, regardless of whether they caused them.
The mechanism
Nassim Nicholas Taleb, in Fooled by Randomness (2001), articulated the fundamental problem: in any domain where outcomes are influenced by factors beyond the decision-maker’s control, outcomes are a poor proxy for decision quality. A trader who makes reckless bets in a rising market looks like a genius — until the market turns. A surgeon who operates on high-risk patients will have a higher mortality rate than one who selects low-risk cases, regardless of skill. The outcome reflects the combined effect of skill, luck, and circumstances. Extracting the skill component requires separating what the decision-maker controlled from what they didn’t.
James March, in The Ambiguities of Experience (2010), extended this analysis to organisational learning. He argued that organisations routinely draw the wrong lessons from their experience because they evaluate strategies by results rather than by the quality of the reasoning that produced them. A strategy that succeeded in favourable market conditions is replicated. A strategy that failed in unfavourable conditions is abandoned. Neither lesson is warranted if the conditions — which were outside the strategist’s control — drove the outcome more than the strategy itself.
The mechanism that makes this so persistent is narrative coherence. The human brain constructs stories from events, and stories have causes and effects. When the outcome is good, the brain identifies the decisions that preceded it as causes. When the outcome is bad, the brain identifies those same decisions as mistakes. This narrative reconstruction happens automatically and feels like understanding. But it’s a pattern imposed on data, not extracted from it — and the pattern changes entirely depending on the ending.
This connects to Kahneman and Tversky’s broader work on how people evaluate outcomes under uncertainty. Loss aversion means that bad outcomes sting disproportionately, which amplifies the resulting effect in the negative direction: a bad outcome doesn’t just make the decision look wrong — it makes it feel reckless, irresponsible, or foolish, regardless of the reasoning that supported it.
Separating process from outcome protects against both errors: it keeps a good result from excusing poor reasoning, and it keeps a bad result from discrediting reasoning that was sound.
The practical implications
The two-column exercise makes the separation concrete. Writing “what I controlled (process)” in one column and “what I didn’t control (outcome factors)” in the other forces a granular decomposition of the result. A successful product launch might have been driven 40% by your marketing strategy (process) and 60% by a competitor’s unexpected exit from the market (luck). Both contributed to the result. Only one should inform your future decisions.
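If you keep decision reviews digitally, the two columns can live in a structured record rather than on paper. The sketch below is a minimal illustration, not a prescribed template; the field names and the launch entry are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionReview:
    """A two-column review: what was controlled (process) vs. what wasn't (outcome factors)."""
    decision: str
    process_factors: list[str] = field(default_factory=list)   # column 1: controlled
    outcome_factors: list[str] = field(default_factory=list)   # column 2: not controlled

    def summary(self) -> str:
        lines = [f"Decision: {self.decision}", "Controlled (process):"]
        lines += [f"  - {item}" for item in self.process_factors]
        lines.append("Not controlled (outcome factors):")
        lines += [f"  - {item}" for item in self.outcome_factors]
        return "\n".join(lines)

# Hypothetical entry for the launch example above.
review = DecisionReview(
    decision="Q3 product launch",
    process_factors=["marketing strategy", "pricing informed by pilot data"],
    outcome_factors=["competitor's unexpected exit from the market"],
)
print(review.summary())
```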
Evaluate decisions at the time they were made, not after the outcome. The question isn’t “did this work?” It’s “given what I knew at the time, was this the best reasoning I could have applied?” A decision to invest in a market that subsequently crashed isn’t a bad decision if the evidence at the time supported it. A decision to hire someone who turned out to be exceptional isn’t a good decision if the evidence at the time was insufficient and you got lucky. Anchoring evaluation to the information available at the decision point, rather than at the outcome point, is the only way to learn from experience accurately.
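A toy expected-value calculation makes the point concrete. The probabilities and returns below are invented for illustration; the only claim is that the first number (expected value given what was known at the time) is the one the decision-maker controlled, and the second (the realised draw) is not.

```python
import random

# Hypothetical figures available at the decision point (assumptions for illustration).
p_market_holds = 0.80    # estimated probability the market does not crash
gain_if_holds = 0.30     # return if the market holds
loss_if_crash = -0.50    # return if the market crashes

# Decision quality: expected value given the information available at the time.
expected_return = p_market_holds * gain_if_holds + (1 - p_market_holds) * loss_if_crash
print(f"Expected return at decision time: {expected_return:+.2f}")

# Outcome quality: one draw of the world, which the decision-maker did not control.
random.seed(7)
realised_return = gain_if_holds if random.random() < p_market_holds else loss_if_crash
print(f"Realised return this time:        {realised_return:+.2f}")

# "Resulting" judges the decision by the second number; only the first reflects
# the reasoning that was actually under the decision-maker's control.
```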
Build a track record of process, not just results. Over time, good processes produce better results than bad processes — but the correlation only becomes visible across many decisions. A single outcome is meaningless as feedback; ten outcomes begin to reveal a pattern; a hundred outcomes separate skill from luck with reasonable confidence. Keeping a record of your reasoning, confidence level, and information base at the time of each decision creates the dataset for evaluating your actual decision quality — something results alone can never provide.
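A small simulation shows why the sample size matters. The sketch assumes two decision processes whose true hit rates differ (60% versus 45%, figures chosen only for illustration) and estimates how often the genuinely better process also looks better after 1, 10, and 100 decisions, counting ties as a coin flip.

```python
import random

def looks_better(p_good: float, p_bad: float, n_decisions: int, trials: int = 10_000) -> float:
    """Fraction of trials where the better process has more good outcomes (ties count as half)."""
    score = 0.0
    for _ in range(trials):
        good = sum(random.random() < p_good for _ in range(n_decisions))
        bad = sum(random.random() < p_bad for _ in range(n_decisions))
        score += 1.0 if good > bad else 0.5 if good == bad else 0.0
    return score / trials

random.seed(0)
for n in (1, 10, 100):
    print(f"{n:>3} decisions: better process looks better in {looks_better(0.60, 0.45, n):.0%} of trials")

# After a single decision the better process is barely distinguishable from the worse one;
# after a hundred, the underlying difference in process quality shows through almost every time.
```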
The bigger picture
The cultures of most organisations are outcome-based. Bonuses are tied to results. Promotions are tied to successes. Failures are punished regardless of the reasoning behind them. This incentive structure produces a predictable response: people optimise for outcomes they can take credit for and avoid decisions with visible downside, regardless of expected value. The result is systematic risk aversion, short-term thinking, and a reluctance to make any call whose failure would be easy to point at, even when the expected value is strongly positive.
Shifting from outcome evaluation to process evaluation requires a fundamental cultural change — one that most organisations endorse in principle and undermine in practice. Saying “we value good thinking even when results are bad” while firing everyone whose projects fail sends a clear signal about which standard actually applies.
The individual practice of separating process from result is, in this context, an act of self-defence. It protects your ability to learn from experience by filtering out the noise of randomness. It prevents good luck from teaching you to be reckless and bad luck from teaching you to be timid. And it builds, over time, the rarest and most valuable form of professional competence: the ability to trust your own judgement based on evidence rather than on the last outcome you happened to observe.
References
- Baron, J., & Hershey, J. C. (1988). Outcome bias in decision evaluation. Journal of Personality and Social Psychology, 54(4), 569–579.
- Duke, A. (2018). Thinking in Bets: Making Smarter Decisions When You Don't Have All the Facts. Portfolio.
- Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–292.
- March, J. G. (2010). The Ambiguities of Experience. Cornell University Press.
- Taleb, N. N. (2001). Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets. Random House.