The Pre-Mortem: Why Assuming Failure Produces More Honest Planning
Forward planning is optimism in disguise. The pre-mortem exploits a simple temporal trick: imagining that the project has already failed, then explaining why, unlocks a level of honest risk assessment that prospective analysis consistently misses.
The project has been approved. The team is assembled. The plan is detailed, the milestones are set, and the leadership is aligned. Someone asks, “What could go wrong?” A few polite risks are mentioned — market conditions, resource constraints, the usual. Someone adds them to a risk register. The meeting moves on. Six months later, the project fails for a reason that everyone in the room had privately suspected but nobody said aloud.
This pattern is so common it barely registers as a failure of process. But it is one. The question “what could go wrong?” is structurally weak — it asks people to challenge a plan that has already been endorsed, in a room full of people who endorsed it, in a social context where raising doubts feels like disloyalty. The pre-mortem changes the question, and that change shifts how people answer it.
The research
Gary Klein, a psychologist specialising in naturalistic decision-making, introduced the pre-mortem technique in a 2007 article in Harvard Business Review. The method is deceptively simple: before a project begins, the team imagines that it’s six months (or a year) in the future and the project has failed. Each person then independently writes down the reasons for the failure. The reasons are shared and discussed. Preventable failures are flagged for action.
Klein’s earlier research, published in Sources of Power (1998), established the cognitive basis for the technique. He spent decades studying how experts make decisions in high-stakes environments — firefighters, military commanders, neonatal intensive care nurses — and found that their advantage wasn’t superior analytical thinking. It was superior pattern recognition combined with mental simulation. Experts didn’t generate multiple options and compare them. They imagined their first plausible course of action playing out, looking specifically for ways it might fail. If the simulation flagged a critical flaw, they discarded the option and tried the next one.
The pre-mortem extends this expert simulation process to teams. It uses a cognitive finding that Deborah Mitchell, J. Edward Russo, and Nancy Pennington documented in a 1989 paper in the Journal of Behavioral Decision Making: prospective hindsight — imagining that an event has already occurred — increases the ability to generate explanations for that event by roughly 30% compared to prospective prediction. Asking “the project failed; why?” produces more reasons, more specific reasons, and more diverse reasons than asking “what might cause it to fail?”
The temporal shift is critical. When you ask people to predict failure, they’re working against the gravitational pull of the plan’s internal coherence. The plan has already been rationalised. It already makes sense. Challenging it requires overcoming the narrative the team has collectively constructed. But when you ask people to explain a failure that has “already happened,” the narrative constraint disappears. The failure is a given. The task is merely to explain it. This reframing releases a flood of causal reasoning that the predictive frame suppresses.
The mechanism
Irving Janis identified the social dynamics that make standard risk assessment so ineffective in his 1982 work on groupthink. When a group has committed to a decision, several pressures converge to suppress dissent: the desire for consensus, the social cost of being seen as negative, the authority gradient that discourages junior members from challenging senior ones, and the shared investment in the plan’s success. Asking “what could go wrong?” in this environment is asking people to bear all of these social costs simultaneously.
The pre-mortem neutralises these dynamics by changing the social context of the contribution. Voicing a concern about a plan is dissent. Explaining why a (hypothetically) failed project failed is analysis. The same observation — “we didn’t have enough engineering capacity” — carries a completely different social charge depending on whether it’s offered as a criticism of the current plan or as an explanation of a fictional failure. In the pre-mortem frame, the person raising the concern isn’t opposing the team. They’re contributing to a shared exercise. The safety to speak up is built into the structure of the task.
Daniel Kahneman, in Thinking, Fast and Slow (2011), endorsed the pre-mortem as one of the few debiasing techniques he considers genuinely effective. Most interventions against overconfidence — warnings, training, checklists — produce modest effects because they fight the bias directly. The pre-mortem works because it sidesteps the bias entirely. It doesn’t ask you to be less optimistic about the plan. It asks you to explain a failure — a task your brain handles with a completely different, and far more honest, set of cognitive tools.
The pre-mortem doesn’t ask whether the plan will fail. It asks why it already did. The difference in framing produces a difference in honesty that no amount of careful risk analysis can match.
The practical implications
A useful pre-mortem needs at least three specific failure scenarios. Klein’s research found that the technique works best when participants generate reasons independently before sharing; this prevents anchoring on the first suggestion and ensures that diverse failure modes are captured. The scenarios must be concrete: “the client changed requirements in week four and we had no change management process” is usable. “Things didn’t go to plan” is not.
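To make the independent-generation rule structural rather than a matter of facilitation skill, here is a minimal sketch in Python. The `PreMortemSession` class, its field names, and the three-scenario check are illustrative assumptions layered on Klein’s prescription, not part of his published method.

```python
from dataclasses import dataclass, field

@dataclass
class FailureScenario:
    """One concrete reason the project 'has already failed'."""
    author: str
    reason: str  # must be specific, e.g. "client changed requirements in week four"

@dataclass
class PreMortemSession:
    """Collects failure scenarios privately, then reveals them all at once."""
    MIN_SCENARIOS = 3  # assumed floor; below this the exercise rarely surfaces real risk

    _private: list[FailureScenario] = field(default_factory=list)
    revealed: bool = False

    def submit(self, author: str, reason: str) -> None:
        """Each participant writes independently; nothing is visible to others yet."""
        if self.revealed:
            raise RuntimeError("scenarios already shared; later additions would anchor on them")
        self._private.append(FailureScenario(author, reason))

    def reveal(self) -> list[FailureScenario]:
        """Share everything simultaneously, so nobody anchors on the first voice."""
        if len(self._private) < self.MIN_SCENARIOS:
            raise ValueError(f"need at least {self.MIN_SCENARIOS} scenarios for a useful pre-mortem")
        self.revealed = True
        return list(self._private)
```

The `revealed` flag encodes the anchoring safeguard in the structure itself: once scenarios are shared, the session refuses further submissions instead of relying on the facilitator to police them.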
The technique is most valuable at the point of highest confidence. The natural instinct is to conduct risk analysis early, when uncertainty is acknowledged, and to skip it once the plan feels solid. But the pre-mortem is specifically designed for the moment when the plan feels bulletproof — because that’s when the team’s optimism bias is strongest and when the social pressure against dissent is highest. Running a pre-mortem when everyone is confident is uncomfortable. That discomfort is precisely the signal that it’s needed.
The conversion step is what separates a pre-mortem from pessimism. Generating failure scenarios without action items is just structured anxiety. Each scenario needs to be evaluated on two dimensions: likelihood and preventability. High-likelihood, preventable failures become immediate action items. High-likelihood, unpreventable failures become contingency plans. Low-likelihood failures are noted and monitored. The pre-mortem’s output isn’t a list of worries — it’s a risk-removal checklist.
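The two-dimensional conversion step maps naturally onto a small decision function. Here is a minimal Python sketch, assuming a subjective likelihood score in [0, 1] and a boolean preventability flag; the 0.5 cut-off and the disposition labels are illustrative choices, not values from the source.

```python
from enum import Enum

class Disposition(Enum):
    ACTION_ITEM = "fix now"            # high likelihood, preventable
    CONTINGENCY_PLAN = "plan around"   # high likelihood, not preventable
    MONITOR = "note and watch"         # low likelihood

def triage(likelihood: float, preventable: bool, threshold: float = 0.5) -> Disposition:
    """Map one failure scenario to a disposition.

    `likelihood` is a subjective probability in [0, 1]; the default
    threshold is an assumed cut-off for illustration, not from Klein.
    """
    if likelihood >= threshold:
        return Disposition.ACTION_ITEM if preventable else Disposition.CONTINGENCY_PLAN
    return Disposition.MONITOR

# e.g. triage(0.7, preventable=True) -> Disposition.ACTION_ITEM
```

The design point is the return type: every scenario leaves the exercise with a disposition attached, which is what turns a list of worries into a risk-removal checklist.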
The bigger picture
Planning processes are built on a foundational assumption: that imagining success is the best way to achieve it. Visualise the goal. Map the milestones. Rally the team. This approach produces plans that are coherent, motivating, and systematically blind to the conditions under which they’ll fail.
The pre-mortem doesn’t replace forward planning. It complements it by activating a cognitive mode that forward planning cannot access. The brain that imagines future success and the brain that explains past failure are, in functional terms, different instruments — one wired for narrative coherence and motivation, the other for pattern recognition and causal analysis. Using only one of them produces a plan that feels complete but isn’t.
The organisations that build pre-mortems into their decision processes aren’t more pessimistic than those that don’t. They’re more honest. They’ve institutionalised a moment of structured doubt at exactly the point where doubt is most suppressed and most needed. The few minutes it takes to ask “assume this failed — why?” is the cheapest risk management investment any team can make.
References
- Klein, G. (2007). Performing a project premortem. Harvard Business Review, 85(9), 18–19.
- Klein, G. (1998). Sources of Power: How People Make Decisions. MIT Press.
- Mitchell, D. J., Russo, J. E., & Pennington, N. (1989). Back to the future: Temporal perspective in the explanation of events. Journal of Behavioral Decision Making, 2(1), 25–38.
- Janis, I. L. (1982). Groupthink: Psychological Studies of Policy Decisions and Fiascoes. Houghton Mifflin.
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.