Incentive Asymmetry: Why Advice Gets Bolder When the Advisor Bears No Risk
The confidence with which advice is delivered tells you almost nothing about its quality. What tells you something is what happens to the advisor if the advice is wrong.
A management consultant recommends a major organisational restructure. The recommendation is detailed, confident, and persuasive. The slides are beautiful. The logic is clean. What’s absent from the presentation — and from the conversation — is what happens to the consultant if the restructure fails. They’ll have moved on to the next engagement. Their fee was paid for the recommendation, not the outcome. If the restructure damages morale, loses key talent, and costs millions in productivity, the consultant experiences none of these consequences. Their incentive structure is perfectly asymmetric: they capture the upside of appearing bold and strategic while bearing none of the downside of being wrong.
This asymmetry doesn’t make the consultant dishonest. It makes their advice structurally unreliable in a specific way: it will be systematically bolder, more confident, and less attentive to downside risk than advice from someone whose own resources are at stake. The advisor isn’t lying. Their judgement is simply being shaped by incentives they may not even be aware of.
The research
Nassim Nicholas Taleb made the case for skin in the game as a decision filter in his 2018 book of the same name. His central argument was epistemic rather than moral: people who bear the consequences of their judgements develop better judgement over time, because reality provides corrective feedback. People who are insulated from consequences don’t receive this feedback, so their judgement doesn’t improve — and may actively degrade as their confidence grows without calibration. The quality of advice is therefore correlated with the advisor’s exposure to the outcomes of that advice.
Michael Jensen and William Meckling formalised the underlying problem in their foundational 1976 paper on agency theory in the Journal of Financial Economics. They demonstrated that whenever one person (the agent) makes decisions on behalf of another (the principal), the divergence in their incentives creates predictable distortions. The agent will tend to act in ways that serve their own interests — not necessarily through conscious self-serving behaviour, but through the subtle, often unconscious influence of incentive structures on perception and judgement. A real estate agent, for example, will recommend selling sooner at a lower price because their commission structure rewards transaction volume over price improvement.
George Loewenstein, Daylian Cain, and Sunita Sah explored a counterintuitive finding in a 2011 paper in the American Economic Review: disclosing conflicts of interest doesn’t solve the problem and can actually make it worse. When advisors disclosed their conflicts, they felt morally licensed to give even more biased advice — the disclosure had “warned” the recipient, so the advisor felt absolved. Meanwhile, recipients felt socially pressured to follow the advice despite the disclosure, because ignoring it would signal distrust. Transparency, in this case, amplified rather than corrected the incentive distortion.
The mechanism
The mechanism operates through what Max Bazerman and Don Moore, in Judgment in Managerial Decision Making (2013), call “motivated reasoning” — the unconscious tendency to reach conclusions that serve one’s interests while genuinely believing the reasoning is objective. A consultant doesn’t think “this restructure will fail, but I’ll recommend it because it generates fees.” The consultant genuinely believes the restructure is the right move — because their cognitive machinery has been shaped by an incentive structure that rewards recommending large, visible interventions over small, unglamorous ones.
Uri Gneezy and Aldo Rustichini demonstrated in a 2000 paper in The Quarterly Journal of Economics that incentives shape behaviour in non-linear ways: small monetary incentives produced worse performance than no incentives at all, and only sufficiently large incentives improved it. The same non-linearity applies to risk exposure. Advisors who share proportional downside risk (they lose something if the advice is wrong) give more careful, more hedged, and more accurate recommendations than those who bear no risk. Advisors who bear excessive downside risk (their career depends on a single recommendation) become overly conservative. The most useful signal comes from advisors whose exposure is proportional: enough to sharpen their attention, not enough to paralyse their judgement.
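This non-linear relationship can be sketched as a toy model. The function below is purely illustrative: it is not fitted to data from any of the cited studies, and the inverted-U shape is an assumption chosen only to mirror the pattern the paragraph describes, with quality lowest at zero and total exposure and highest at moderate, proportional exposure.

```python
def decision_quality(risk_share: float) -> float:
    """Toy model of decision quality as a function of the advisor's
    share of downside risk (0 = fully insulated, 1 = career-ending
    exposure to a single recommendation).

    Illustrative only: the inverted-U curve 4x(1 - x) peaks at
    x = 0.5, so no exposure and total exposure both score 0.0 and
    proportional exposure scores best. Not fitted to any data.
    """
    if not 0.0 <= risk_share <= 1.0:
        raise ValueError("risk_share must be in [0, 1]")
    return 4.0 * risk_share * (1.0 - risk_share)


for share in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"risk share {share:.2f} -> quality {decision_quality(share):.2f}")
```

The specific curve is arbitrary; the point it encodes is that the relationship is not monotone, so "more exposure is always better" is the wrong reading of the research.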
Taleb’s framework identifies a specific pattern: in domains where feedback is slow and outcomes are ambiguous — strategy, policy, long-term investment — the separation between advisor and consequence is most dangerous. A consultant’s restructure recommendation won’t be evaluated for two years. By then, the causal link between the advice and the outcome is muddied by a hundred intervening variables. The advisor never receives the corrective feedback that would improve their future judgement. They remain confidently wrong indefinitely.
The useful question is not whether the advisor means well. It’s whether their judgement has been shaped by bearing the consequences of their previous recommendations — or by being insulated from them.
The practical implications
Map the asymmetry before weighing the advice. Two questions are sufficient: what does this person gain if I follow their recommendation? What do they lose if it goes wrong? If the gain is substantial (fees, status, a confirmed thesis) and the loss is zero (they move on, face no consequences, bear no cost), the asymmetry is high. This doesn’t invalidate the advice. It tells you to weight it less heavily and to seek additional input from sources with more symmetric exposure.
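The two questions above can be turned into a rough scoring habit. The sketch below is a hypothetical heuristic, not an instrument from the cited research: the class name, field names, and the 0-to-1 estimates are all illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class AdviceSource:
    """Hypothetical record for weighing a recommendation.

    gain_if_followed: rough 0-1 estimate of what the advisor gains
        if you act on the advice (fees, status, a confirmed thesis).
    loss_if_wrong: rough 0-1 estimate of what the advisor loses if
        the advice fails (reputation, money, career cost).
    All names and scales here are illustrative, not standard.
    """
    name: str
    gain_if_followed: float
    loss_if_wrong: float


def asymmetry(source: AdviceSource) -> float:
    """Gain minus loss: strongly positive scores flag advisors who
    capture the upside while bearing none of the downside."""
    return source.gain_if_followed - source.loss_if_wrong


consultant = AdviceSource("external consultant", gain_if_followed=0.9, loss_if_wrong=0.0)
operator = AdviceSource("manager who must live with it", gain_if_followed=0.3, loss_if_wrong=0.7)

# The higher the asymmetry score, the less weight the advice
# deserves on its own, and the more it needs corroboration from
# sources with more symmetric exposure.
for s in (consultant, operator):
    print(f"{s.name}: asymmetry {asymmetry(s):+.2f}")
```

A high score does not invalidate the advice, matching the paragraph above; it is a prompt to discount and cross-check rather than to dismiss.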
Watch for confidence that exceeds the advisor’s exposure. A person who has wagered their own resources on an outcome will naturally express appropriate uncertainty — because overconfidence has personal costs. A person with no skin in the game can afford to be maximally confident, because confidence is socially rewarded and inaccuracy is costless. When you encounter advice that is both very confident and very bold, check whether the advisor’s confidence is backed by their own risk exposure or merely by their rhetorical style.
Seek advisors who have lived with the consequences of similar advice. A consultant who has implemented restructures and stayed to manage the aftermath has different judgement from one who has only recommended them. A financial advisor who invests their own capital alongside their clients has different judgement from one who earns fees regardless of returns. The distinction is not about personal trust; it is about the quality of the feedback loop that has shaped their expertise. Skin in the game, accumulated over time, produces calibrated judgement. Insulation from consequences produces uncalibrated confidence.
The bigger picture
Modern economies are built on a vast infrastructure of separated risk — advisors, consultants, analysts, commentators, and experts who provide recommendations they will never have to live with. This separation is not always harmful. Specialisation requires it. You can’t expect your surgeon to share your medical risk or your lawyer to share your legal exposure. But recognising the separation is essential for calibrating how much weight their advice deserves.
The people whose advice has been most valuable across history have typically been those who bore consequences for being wrong. Taleb points to the ancient tradition of architects sleeping under their bridges — a literal alignment of advice and consequence that concentrated the mind wonderfully. The modern equivalent isn’t requiring consultants to sleep under their restructures. It’s building the habit of asking, before you act on anyone’s recommendation: what happens to you if this doesn’t work?
The answer won’t always disqualify the advice. But it will always sharpen your judgement about how to use it.
References
- Taleb, N. N. (2018). Skin in the Game: Hidden Asymmetries in Daily Life. Random House.
- Jensen, M. C., & Meckling, W. H. (1976). Theory of the firm: Managerial behavior, agency costs and ownership structure. Journal of Financial Economics, 3(4), 305–360.
- Loewenstein, G., Cain, D. M., & Sah, S. (2011). The limits of transparency: Pitfalls and potential of disclosing conflicts of interest. American Economic Review, 101(3), 423–428.
- Gneezy, U., & Rustichini, A. (2000). Pay enough or don't pay at all. The Quarterly Journal of Economics, 115(3), 791–810.
- Bazerman, M. H., & Moore, D. A. (2013). Judgment in Managerial Decision Making (8th ed.). Wiley.