False Precision: Why Specific Numbers Feel More Credible Than They Are
Your brain treats specific numbers as signals of expertise. A detailed projection feels more trustworthy than an honest range — even when the specificity is entirely invented. The distinction between precision and accuracy is the gap where false confidence hides.
The financial analyst presents the five-year projection: revenue of £12.4 million in year three, growing to £18.7 million by year five, with an EBITDA margin of 22.3%. The numbers are crisp. The decimal places suggest rigour. The board nods. Somewhere in the back of everyone’s mind is the faint awareness that predicting revenue to three significant figures five years from now is absurd — but the feeling of precision overwhelms the knowledge of uncertainty. The numbers on the slide feel like facts. They’re actually guesses wearing the costume of mathematics.
Precision and accuracy are different things. Precision is the level of detail in a measurement or prediction: “23.7%” is more precise than “around 20–25%.” Accuracy is how close that measurement or prediction is to reality. A forecast can be extremely precise and wildly inaccurate. And in nearly every domain where humans make predictions — revenue, timelines, market sizes, project costs — the precision of the numbers consistently outstrips the accuracy behind them, because the cognitive effect of specificity is to suppress the very uncertainty that the prediction should be communicating.
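A quick simulation makes the gap visible. The sketch below uses entirely invented numbers: the “true” outcome is drawn from a wide distribution the forecaster cannot see, and we compare how often a precise point forecast lands near the truth against how often an honest range contains it.

```python
import random

random.seed(42)

# Hypothetical illustration: the "true" margin is uncertain, drawn here
# from a wide distribution. All parameters are invented for the sketch.
true_margins = [random.gauss(20.0, 6.0) for _ in range(100_000)]

point_forecast = 23.7          # precise: three significant figures
range_forecast = (10.0, 30.0)  # imprecise but honest

# "Accuracy" of the precise forecast: within 1 point of the truth.
point_hits = sum(abs(point_forecast - t) < 1.0 for t in true_margins)

# "Accuracy" of the range: does it contain the truth?
range_hits = sum(range_forecast[0] <= t <= range_forecast[1] for t in true_margins)

print(f"precise 23.7% within 1pt of truth: {point_hits / len(true_margins):.0%}")
print(f"range 10-30% contains the truth:   {range_hits / len(true_margins):.0%}")
# Under these assumptions, the precise number is "right" roughly 11% of
# the time; the honest range covers reality roughly 90% of the time.
```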
The research
Ilan Yaniv and Dean Foster explored the trade-off between accuracy and informativeness in a 1995 paper in the Journal of Experimental Psychology: General. They found that people evaluating forecasts faced a systematic tension: precise forecasts (narrow point estimates) were perceived as more informative and more expert, while broader ranges — though more likely to capture the true value — were perceived as less useful and less competent. The forecaster who said “between 10% and 30%” was judged as less capable than the one who said “23.7%,” even when the range was more honest and more likely to be correct.
This creates a perverse incentive. Forecasters who communicate their genuine uncertainty — by providing ranges rather than point estimates — are penalised by their audience. Forecasters who express false precision — by offering specific numbers that imply a certainty they don’t possess — are rewarded with credibility. The audience’s preference for precision drives the supply of precision, regardless of accuracy.
David Budescu and Ning Du, in a 2007 paper in Management Science, demonstrated that investors showed coherence in how they interpreted precise probability estimates but poor calibration in how they translated those estimates into decisions. Participants treated precise probabilities as more reliable than vague ones, allocated resources more aggressively on precise predictions, and showed less hedging behaviour — even when the precise predictions were generated by the same models that produced the vague ones. The precision signal overrode the uncertainty signal.
Philip Tetlock and Dan Gardner, in Superforecasting (2015), found that the best forecasters used precision strategically. They distinguished between situations where fine-grained probability estimates (65% vs 70%) reflected genuine differences in their evidence base, and situations where precision was cosmetic — where the evidence couldn’t support a distinction finer than “roughly likely” or “somewhat unlikely.” The discipline wasn’t in achieving maximum precision but in matching the precision of the estimate to the quality of the underlying evidence.
The mechanism
Daniel Goldstein and David Rothschild, in a 2014 paper in Judgment and Decision Making, found that people often have poor intuitive understanding of probability distributions. When told there’s a 70% chance of an outcome, people don’t naturally construct the corresponding 30% counterfactual scenario. They process the 70% as if it were closer to certainty than it is. Point estimates — single numbers without ranges — amplify this error by collapsing a distribution of possible outcomes into a single value, hiding the variance entirely.
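To see what a point estimate hides, here is a sketch with made-up parameters (chosen so the mean roughly echoes the £18.7M figure above). Collapsing a skewed distribution of outcomes into its mean conceals both the spread and the real chance of a serious shortfall.

```python
import random
import statistics

random.seed(7)

# Hypothetical year-five revenue outcomes (in £M), lognormal to reflect
# the skew typical of growth forecasts. Parameters are invented.
outcomes = [random.lognormvariate(2.8, 0.5) for _ in range(100_000)]

point_estimate = statistics.mean(outcomes)  # the single number on the slide
outcomes.sort()
p10 = outcomes[int(0.10 * len(outcomes))]
p90 = outcomes[int(0.90 * len(outcomes))]

print(f"point estimate:       roughly {point_estimate:.1f}")
print(f"10th-90th percentile: roughly {p10:.1f} to {p90:.1f}")

# How much probability mass sits below half the point estimate?
shortfall = sum(x < point_estimate / 2 for x in outcomes) / len(outcomes)
print(f"chance of under half the point estimate: {shortfall:.0%}")
# Under these assumptions: a point estimate near 18.6 sits on a
# distribution running from roughly 8.7 to 31, with about a 13% chance
# of landing below half the headline number.
```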
The psychological mechanism is anchoring combined with cognitive fluency. A precise number — £18.7 million — provides a strong anchor. It’s easy to process, easy to remember, and easy to build plans around. A range — £12M to £25M — is disfluent: it forces the recipient to hold two numbers in mind, to reckon with the spread between them, and to plan for multiple scenarios rather than one. The brain prefers the precise number not because it’s more useful but because it’s less cognitively taxing.
Gerd Gigerenzer and colleagues explored how the public interprets probabilistic forecasts in a 2005 paper in Risk Analysis. When told there was a “30% chance of rain tomorrow,” many participants interpreted this as meaning it would rain for 30% of the day, or over 30% of the geographic area — rather than understanding it as a probability statement about whether rain would occur at all. The finding illustrates a broader point: even when uncertainty is explicitly communicated, people struggle to use it correctly. False precision eliminates this struggle by removing the uncertainty from view — which feels helpful but is actively harmful for decision quality.
A range is less satisfying than a point estimate. It is also more honest, more useful, and more likely to contain the actual outcome. The satisfaction of precision is precisely what makes it dangerous.
The practical implications
Always ask for the range, not just the point estimate. When someone presents a number — a forecast, a budget, a timeline — ask: “What’s the range?” If they can’t provide one, the precision is invented. If they can, the width of the range tells you the true state of the evidence. A range of £17M to £20M suggests the estimate is well grounded. A range of £10M to £30M means the point estimate of £18.7M selected one value from a distribution wide enough to accommodate very different realities.
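One way to operationalise this habit is to read the range’s width relative to its midpoint. The helper below is purely illustrative (the thresholds are arbitrary, not drawn from any of the research above), but it captures the idea of treating width as a signal of evidence quality.

```python
def range_signal(low: float, high: float) -> str:
    """Interpret the width of an estimate's range relative to its midpoint.

    Thresholds are illustrative, not taken from the literature.
    """
    midpoint = (low + high) / 2
    relative_width = (high - low) / midpoint
    if relative_width < 0.25:
        return "narrow range: estimate appears well grounded"
    if relative_width < 0.75:
        return "moderate range: treat the point estimate as one scenario"
    return "wide range: the point estimate is one pick among many realities"

print(range_signal(17, 20))  # the well-grounded case from the text
print(range_signal(10, 30))  # the £10M-£30M case: relative width of 1.0
```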
Distinguish between measured precision and estimated precision. A laboratory thermometer reading 23.7°C is precise and accurate — the instrument was designed to deliver that resolution. A five-year revenue forecast of £23.7M is precise but not accurate — the model that generated it has error margins far larger than the decimal place implies. The test is whether the precision comes from a measurement instrument with known resolution, or from a model or estimate where the decimal places are cosmetic.
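The same test can be mechanised: round an estimate to the resolution its own error bar supports. The function below is a hypothetical illustration (the name and the relative-error inputs are invented), showing why a thermometer earns its decimal place and a five-year forecast does not.

```python
import math

def justified_round(estimate: float, rel_error: float) -> str:
    """Round an estimate to the resolution its error bar supports.

    rel_error is the assumed relative uncertainty, e.g. 0.4 for +/-40%.
    Purely illustrative: real models should report full intervals.
    """
    absolute_error = abs(estimate) * rel_error
    # Keep digits only down to the order of magnitude of the error.
    scale = 10 ** math.floor(math.log10(absolute_error))
    rounded = round(estimate / scale) * scale
    return f"{rounded:g} +/- {absolute_error:g}"

print(justified_round(23.7, 0.005))  # instrument-grade: "23.7 +/- 0.1185"
print(justified_round(23.7, 0.40))   # forecast-grade:   "24 +/- 9.48"
```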
Use precision in your own communication to signal the quality of your evidence. When your evidence is strong, be precise: “I’m 78% confident” or “the timeline is 11–13 weeks.” When your evidence is weak, communicate that through wider ranges: “I’m somewhere between 50% and 70%” or “the timeline could be anywhere from 2 to 6 months.” Matching your precision to your evidence is one of the simplest ways to maintain calibration and build trust with the people who rely on your judgement.
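Proper scoring rules make the incentive concrete. Here is a sketch using the Brier score with invented numbers: when events actually occur 60% of the time, a forecaster who reports the honest 0.6 outscores one who dresses the same evidence up as near-certainty.

```python
import random

random.seed(3)

TRUE_P = 0.6  # hypothetical: events occur 60% of the time
events = [random.random() < TRUE_P for _ in range(100_000)]

def brier(forecast: float, outcomes: list[bool]) -> float:
    """Mean squared error between a probability forecast and outcomes."""
    return sum((forecast - o) ** 2 for o in outcomes) / len(outcomes)

# A calibrated forecaster matches the evidence; an overconfident one
# reports false near-certainty from the same evidence base.
print(f"calibrated 0.60:    {brier(0.60, events):.3f}")
print(f"overconfident 0.95: {brier(0.95, events):.3f}")
# Lower is better: roughly 0.24 for the calibrated forecast versus
# roughly 0.36 for the falsely precise one.
```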
The bigger picture
Professional culture rewards precision. The analyst who presents clean numbers looks competent. The one who presents ranges and caveats looks uncertain. The consultant who delivers a specific recommendation looks decisive. The one who says “it depends on several factors I can’t predict” looks unhelpful. These social incentives systematically push communication toward false precision and away from honest uncertainty.
The cost is paid downstream, when plans built on precise-but-inaccurate numbers encounter the reality those numbers concealed. The project that was budgeted at £2.3M costs £4.1M. The product that was projected to reach 50,000 users reaches 12,000. The timeline that was set at 14 weeks extends to 9 months. In each case, the original estimate was precise enough to feel like a commitment, and the recipients built their plans accordingly. The precision wasn’t just wrong — it was systematically misleading about the degree of certainty that existed.
The antidote isn’t to abandon precision. It’s to demand that precision be earned — backed by evidence of sufficient quality to justify the level of detail being claimed. A number without a range is a number without context. And a number without context is a trap disguised as an answer.
References
- Yaniv, I., & Foster, D. P. (1995). Graininess of judgment under uncertainty: An accuracy-informativeness trade-off. Journal of Experimental Psychology: General, 124(4), 424–432.
- Budescu, D. V., & Du, N. (2007). Coherence and consistency of investors' probability judgments. Management Science, 53(11), 1731–1744.
- Goldstein, D. G., & Rothschild, D. (2014). Lay understanding of probability distributions. Judgment and Decision Making, 9(1), 1–14.
- Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The Art and Science of Prediction. Crown.
- Gigerenzer, G., Hertwig, R., van den Broek, E., Meder, B., & Martignon, L. (2005). 'A 30% chance of rain tomorrow': How does the public understand probabilistic weather forecasts? Risk Analysis, 25(3), 623–629.