Map vs Territory: Why Polished Plans Feel True and Why That’s Dangerous
A polished plan feels true because it’s coherent. But coherence is a property of the map, not the territory — and the gap between them is where every surprise, failure, and “nobody saw it coming” moment lives.
The spreadsheet is immaculate. Revenue projections in one column, cost assumptions in another, net margins cascading neatly into a five-year forecast. The model accounts for seasonality, customer churn, and market growth rates. It’s been reviewed by three people. It looks right. It feels right. And that feeling — the satisfying click of internal coherence — is precisely the danger.
The model isn’t reality. It’s a compression of reality — a set of assumptions, simplifications, and omissions packaged into a format that your brain can process. Every number in the spreadsheet is a claim about the future that strips away most of the complexity of the actual world it represents. The useful question is not whether the model is wrong. All models are wrong. The question is whether you know how it’s wrong and what it’s leaving out.
The research
Alfred Korzybski, a Polish-American philosopher, coined the phrase “the map is not the territory”, introducing it in a 1931 paper and developing it fully in his 1933 work Science and Sanity. His core argument was that all human knowledge is abstraction — a simplified representation of a reality that is richer, messier, and more complex than any model can capture. The map is useful precisely because it omits detail. A map that reproduced reality at full resolution would be as useless as the territory itself. But the very simplifications that make a map useful also make it misleading, because the things it leaves out don’t announce their absence.
Daniel Oppenheimer, at Princeton, published a review in Trends in Cognitive Sciences in 2008 exploring the psychology of cognitive fluency — the ease with which information is processed. His central finding was that fluent information is judged as more true, more trustworthy, and more intelligent than disfluent information, independent of its actual accuracy. A clearly written report feels more credible than a poorly formatted one with identical content. A polished presentation feels more convincing than a rough one, even when the underlying analysis is the same. Fluency often gets mistaken for truth, and that mistake is common enough to matter.
This finding has direct implications for how we treat models and plans. A well-constructed spreadsheet, a clean Gantt chart, a visually coherent strategy deck — these are fluent representations. They process easily. They feel right. And that feeling of rightness is a property of the representation, not the underlying reality. The model’s coherence tells you that someone has thought carefully about how to present the information. It tells you nothing about whether the assumptions behind it are accurate.
Nassim Nicholas Taleb, in The Black Swan (2007), argued that our models of the world systematically underestimate the probability and impact of rare, high-consequence events — precisely because such events are, by definition, outside the boundaries of the map. Models are built from historical data and known variables. They capture the patterns of normal conditions. What they cannot capture are the conditions under which those patterns break — the discontinuities, regime changes, and novel combinations that produce the outcomes with the greatest impact.
The mechanism
John Sterman, a system dynamics researcher at MIT, articulated the epistemological problem clearly in a 2002 paper in the System Dynamics Review, taking as his title the statistician George Box’s famous dictum: “All models are wrong.” This is not a criticism. It’s a structural observation. A model is a set of choices about what to include and what to exclude. Every model has a boundary — a line beyond which it doesn’t operate. Within that boundary, it can be remarkably accurate. Outside it, it provides no information at all, and it doesn’t tell you when you’ve crossed the line.
The danger is that models hide their own limitations. A spreadsheet doesn’t display a warning when the market conditions that underpin its assumptions have shifted. A strategic plan doesn’t flag when the competitive landscape it was built for has changed. The model continues to produce outputs that look precise and coherent, even when the inputs that generated them are no longer valid. The user — experiencing the fluency of a well-built model — mistakes continued output for continued accuracy.
Daniel Kahneman and Gary Klein explored this in a joint 2009 paper in American Psychologist, examining the conditions under which expert intuition is reliable. They agreed that expertise produces valid intuitions only in environments that are sufficiently regular (patterns repeat) and where the expert receives timely feedback (they learn whether their judgements are correct). In chaotic, novel, or slow-feedback environments, expert intuitions — and the models built from them — degrade rapidly. The expert doesn’t know their model has failed, because the environment has moved outside the conditions in which the model was calibrated.
The practical consequence is that the confidence you have in a model should be inversely proportional to how far into the future it projects and how novel the conditions it addresses. A model of next quarter’s revenue, based on existing contracts and historical patterns, can be reasonably trusted. A model of revenue three years from now, in a market that’s being reshaped by technology, regulation, and competitive dynamics, is a story dressed up as mathematics.
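To make that decay concrete, here is a minimal sketch in Python (all figures invented) of how a modest band of uncertainty in a single assumption, the annual growth rate, compounds across the projection horizon. A growth assumption of somewhere between 0% and 15% looks tame over one year; compounded over five, it spans roughly a factor of two.

```python
# Minimal sketch: how modest annual uncertainty compounds over a
# multi-year projection horizon. All figures are hypothetical.

def revenue_range(base, growth_low, growth_high, years):
    """Bound revenue after `years`, assuming annual growth stays
    somewhere in [growth_low, growth_high] every year."""
    low = base * (1 + growth_low) ** years
    high = base * (1 + growth_high) ** years
    return low, high

base = 4_000_000  # hypothetical current annual revenue (£)
for years in (1, 3, 5):
    low, high = revenue_range(base, growth_low=0.00, growth_high=0.15, years=years)
    spread = (high - low) / low
    print(f"Year {years}: £{low:,.0f} to £{high:,.0f} (spread {spread:.0%})")
```

The point is not the specific numbers but the shape: the honest width of a forecast grows multiplicatively with the horizon, while the spreadsheet’s single column of figures stays just as crisp in year five as in year one.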
The more confident a model makes you feel, the more important it is to ask what it leaves out. Coherence is a property of the map. Accuracy is a property of the fit between the map and the territory.
The practical implications
Name two things your model simplifies or ignores. This exercise is designed to make the simplifications visible, so they can be monitored. If your revenue model assumes stable customer acquisition cost, name that assumption. If your project plan assumes full team availability, name that assumption. The named assumptions become the watch list — the points at which the map is most likely to diverge from the territory.
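One way to keep that watch list from evaporating is to make it an explicit artefact rather than a remembered intention. A minimal sketch in Python; the assumption names, values, and trigger conditions below are hypothetical placeholders, not a prescribed schema.

```python
# Minimal sketch: named model assumptions as explicit, monitorable
# records. Every field here is a hypothetical placeholder.

from dataclasses import dataclass

@dataclass
class Assumption:
    name: str        # what the model takes as given
    assumed: str     # the value baked into the model
    breaks_if: str   # the condition that should trigger a review

watch_list = [
    Assumption("customer acquisition cost", "stable at £120",
               "CAC exceeds £150 for two consecutive months"),
    Assumption("team availability", "full allocation",
               "any key engineer falls below 80% allocation"),
]

for a in watch_list:
    print(f"WATCH {a.name}: assumed {a.assumed}; revisit the model if {a.breaks_if}")
```

The value is not the code. It is that each simplification now has a name, a stated value, and a condition under which the map should no longer be trusted.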
Ask when the model was last tested against reality. Models are built at a point in time, using data and assumptions that were valid at that point. The longer a model runs without being compared to actual outcomes, the more likely it has drifted from the reality it represents. A quarterly comparison between projected and actual results isn’t just good governance — it’s the feedback loop that keeps the model useful. Without it, the model becomes a historical artefact that everyone treats as current.
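The comparison itself can be mechanical. Here is a minimal sketch, with invented figures, of a quarterly projected-versus-actual check: compute each quarter’s error and flag the model for review once the error crosses a tolerance (the 10% threshold is an arbitrary assumption, not a standard).

```python
# Minimal sketch: the quarterly feedback loop between projections and
# actuals. All figures and the tolerance are invented for illustration.

projected = {"Q1": 950_000, "Q2": 1_020_000, "Q3": 1_100_000}
actual    = {"Q1": 970_000, "Q2":   940_000, "Q3":   890_000}
TOLERANCE = 0.10  # flag the model once projections miss by more than 10%

for quarter, p in projected.items():
    a = actual[quarter]
    error = abs(a - p) / p
    verdict = "REVIEW MODEL" if error > TOLERANCE else "within tolerance"
    print(f"{quarter}: projected £{p:,}, actual £{a:,}, error {error:.1%} -> {verdict}")
```

What matters is not the sophistication of the check but that it runs every quarter; a model never compared to outcomes drifts silently.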
Treat precision as a warning sign, not a confidence signal. A forecast that says “revenue will be £4,237,891 in Q3” is not more accurate than one that says “revenue will be between £3.8M and £4.5M.” It’s more precise — and precision without accuracy is worse than useless, because it creates false confidence. When a model produces a single, precise number, ask about the range. If the modeller can’t provide one, the precision is aesthetic, not epistemic.
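If a range is missing, one way to recover it is to push the best and worst cases of the named assumptions through the model alongside the central case. A minimal sketch, with invented figures for churn and average revenue per customer: the point forecast is simply the middle scenario, and the other two bound it.

```python
# Minimal sketch: derive a forecast range by propagating optimistic and
# pessimistic assumption sets, not just the central one. Figures invented.

base_customers = 1_000
scenarios = {
    "low":  {"churn": 0.08, "arpu": 320.0},  # pessimistic assumptions
    "mid":  {"churn": 0.05, "arpu": 350.0},  # the point forecast's assumptions
    "high": {"churn": 0.03, "arpu": 380.0},  # optimistic assumptions
}

for name, s in scenarios.items():
    revenue = base_customers * (1 - s["churn"]) * s["arpu"]
    print(f"{name}: £{revenue:,.0f}")
```

Run it and the single figure of £332,500 becomes a range from roughly £294,000 to £369,000. The range is less satisfying to look at, but it is a more honest description of what the model actually knows.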
The bigger picture
We live in an age of extraordinarily sophisticated models. Financial models, climate models, epidemiological models, machine learning models — each capable of processing more data and producing more detailed outputs than at any point in history. The temptation is to mistake this sophistication for truth. A model that processes ten million data points feels more trustworthy than one that processes ten. But the fundamental epistemological limitation remains: the model is still an abstraction, and the territory is still more complex than any abstraction can capture.
The most dangerous moment in any decision process is the moment the model becomes the reality in the minds of the people using it. When the spreadsheet’s projections become “what will happen” rather than “what might happen under these assumptions,” the map has replaced the territory. From that point forward, every decision is being made in a fictional world that merely resembles the real one.
The discipline of asking “what does this model leave out?” is not scepticism for its own sake. It’s the maintenance required to keep a useful tool useful — to preserve the distinction between the map and the territory in the minds of the people who are navigating by it. Collapse that distinction, and you’re not navigating anymore. You’re dreaming with your eyes open.
References
- Korzybski, A. (1933). Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics. Institute of General Semantics.
- Oppenheimer, D. M. (2008). The secret life of fluency. Trends in Cognitive Sciences, 12(6), 237–241.
- Taleb, N. N. (2007). The Black Swan: The Impact of the Highly Improbable. Random House.
- Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64(6), 515–526.
- Sterman, J. D. (2002). All models are wrong: Reflections on becoming a systems scientist. System Dynamics Review, 18(4), 501–531.