How the AIM Framework Explains Why Bayes' Theorem Does Not Work in Practice
Bayes' Theorem represents one of the most elegant mathematical tools for rational belief updating, yet its practical application consistently falls short of theoretical promise. The AIM Motivational Framework—distinguishing Appetites (A), Intrinsic Motivation (I), and Mimetic Desire (M) as three neural sources of human choice—provides a neuroscientifically grounded explanation for this persistent gap between Bayesian ideals and human reality. The central insight is that Bayesian inference assumes agents can objectively determine prior probabilities and update beliefs purely through evidence, but AIM reveals that human decision-making operates through three distinct neural pathways that integrate pre-consciously, making the rational assignment of priors and unbiased evidence evaluation psychologically impossible in practice.^1
The AIM Framework: Three Neural Sources of Motivation
The AIM Framework synthesizes contemporary neuroscience into a taxonomy of human choice by distinguishing three fundamental motivational sources that converge in the brain's common-currency valuation system. Appetites originate in hypothalamic homeostatic monitoring systems, generating state-dependent motivations like hunger, thirst, and fatigue that rise with deprivation and collapse with satiety. Intrinsic Motivation emerges from dopaminergic midbrain circuits supporting curiosity-driven exploration, mastery development, and autonomous choice—processes valued for their own sake and persistent even without external rewards or social visibility. Mimetic Desire operates through mirror neuron systems that detect others' goal-directed actions, transmitting value via social observation and creating wanting for targets precisely because models pursue them.^3^4
Critically, these three sources integrate through a common-currency system in the ventromedial prefrontal cortex and ventral striatum, which computes a unified subjective value signal used for action selection. This integration is continuous rather than sequential—all three input streams modulate the valuation signal simultaneously whenever active, with normalized weights $w_A$, $w_I$, and $w_M$ representing their relative influence on any given choice. The architectural reality that motivation arises from distinct neural pathways converging pre-consciously has profound implications for Bayesian reasoning, which presumes agents can introspect and rationally determine their beliefs.^1
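To make the weighting concrete, here is a minimal sketch of a common-currency computation under the framework's assumptions; the linear combination, the function name, and the numeric values are illustrative choices of mine, not a claim about the actual neural code.

```python
import numpy as np

def common_currency_value(v_appetitive, v_intrinsic, v_mimetic, w):
    """Toy common-currency signal: a convex combination of the three AIM
    input streams, with w = (w_A, w_I, w_M) normalized to sum to 1."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()  # enforce normalized weights
    return float(w @ np.array([v_appetitive, v_intrinsic, v_mimetic]))

# The same option is valued differently as the weights shift with state:
rested   = common_currency_value(0.2, 0.8, 0.5, w=(0.1, 0.6, 0.3))  # 0.65
fatigued = common_currency_value(0.9, 0.8, 0.5, w=(0.6, 0.2, 0.2))  # 0.80
```

Because only the blended output reaches awareness, an agent inspecting either value has no access to the weights that produced it.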
The Prior Probability Problem: Mimetic Contamination
Bayesian inference requires establishing prior probabilities—the initial degree of belief in a hypothesis before observing evidence. In practice, determining these priors is "extremely difficult" and "impractical," as eliciting a true prior distribution is cognitively demanding and people are "rarely rational and coherent". The AIM Framework explains why: what appears as a rationally derived prior belief is typically contaminated by mimetic desire operating below conscious awareness.^5^1
Mirror neuron systems respond to observable, goal-directed actions but not to unobservable internal states like private curiosity. When an individual observes models—people they notice, admire, or compare themselves to—pursuing certain hypotheses or expressing confidence in particular beliefs, mirror neurons fire as if the observer were performing that action themselves, creating an embodied simulation of the model's intention. This premotor activity has direct anatomical connections to ventral striatum, allowing observed goals to modulate the valuation signal assigned to the same targets in the observer's own choice space. Crucially, individuals with stronger premotor-to-ventral-striatum connectivity show larger increases in their own wanting after observing models, predicting individual susceptibility to mimetic influence.^6^1
The devastating consequence for Bayesian reasoning is that mimetic signals never reach conscious awareness and are not committed to memory. These pre-conscious signals are stripped of their origins before they surface, so individuals have no recollection of why they hold a particular prior belief. When asked to state their degree of belief in a hypothesis—say, P(H) = 0.7—a person may confidently provide that number while being entirely unaware that the belief originated not from rational analysis but from observing prestigious experts or influential models expressing similar confidence. The brain then confabulates a logical reason based on available information to explain the belief, creating an illusion of rational prior assignment.^3
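A toy illustration of the claim, assuming (as a simplification of mine) that a reported prior is a pre-conscious blend of an evidence-based estimate and the confidence expressed by observed models:

```python
def stated_prior(evidential_prior, model_confidences, w_M):
    """Toy model of mimetic contamination: the prior a person reports blends
    their evidence-based belief with the mean confidence of observed models,
    weighted by w_M. Only the blended number reaches awareness."""
    social = sum(model_confidences) / len(model_confidences)
    return (1 - w_M) * evidential_prior + w_M * social

# Evidence alone supports P(H) = 0.4, but three prestigious models express
# confidence near 0.85; with w_M = 0.5 the confidently reported prior is 0.625.
print(stated_prior(0.4, [0.9, 0.85, 0.8], w_M=0.5))
```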
This mimetic contamination is amplified by visibility, social salience, and prestige cues. In scientific and professional contexts, hypotheses championed by high-status models, widely endorsed in the literature, or visibly pursued by multiple agents gain inflated prior probabilities independent of their evidential merit. Ventral striatum responds robustly to social approval and prestige cues, elevating the common-currency value of beliefs associated with high-status sources. As one critic notes, Bayesian analysis depends on priors that reflect "subjective assumptions," and frequentist critics argue that probabilities must be "near objective numbers based on repeated occurrence of outcomes," not socially transmitted valuations. Yet the AIM Framework demonstrates that such social transmission is neurologically inevitable, not a correctable error.^7
Evidence Evaluation: Appetitive Hijacking
Even if priors could be rationally established, Bayesian updating requires unbiased evaluation of new evidence. Yet appetitive states systematically distort judgment. Homeostatic deficits—hunger, fatigue, sleep deprivation—amplify hypothalamic-orbitofrontal-striatal signals until corrective actions override ongoing goals. These urgent $w_A$ spikes can hijack the decision hub, creating "hunger bias" that influences moral judgments, risk assessments, and belief updating in ways orthogonal to evidential quality.^8^10
Research demonstrates that hunger affects acceptability judgments of non-profitable offers independently of intuitive decision-making style and emotional state^8. Orbitofrontal cortex neurons show state-dependent responding—the same stimulus activates reward-coding neurons strongly when an organism is deprived but weakly when sated, demonstrating that value assignment continuously recalibrates with physiological state^1. For Bayesian inference, this means that the likelihood P(E|H) an agent assigns to evidence will vary systematically with bodily deficit levels, not just with the logical strength of the evidence.
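As a sketch of this state dependence, the following compares normative updating with a version in which a homeostatic deficit inflates the weight given to the favored likelihood; the linear gain is an assumed placeholder of mine, not a measured parameter.

```python
def posterior(prior, likelihood_h, likelihood_not_h):
    """Standard Bayes: P(H|E) = P(E|H)P(H) / P(E)."""
    p_e = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / p_e

def state_modulated_posterior(prior, likelihood_h, likelihood_not_h, deficit):
    """Illustrative distortion: a deficit in [0, 1] multiplies the likelihood
    assigned to the favored hypothesis, so the 'same' evidence updates
    beliefs differently depending on physiological state."""
    gain = 1 + deficit
    return posterior(prior, min(1.0, likelihood_h * gain), likelihood_not_h)

print(posterior(0.5, 0.6, 0.4))                       # normative: 0.60
print(state_modulated_posterior(0.5, 0.6, 0.4, 0.5))  # deprived: ~0.69
```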
In practical settings, this produces predictable distortions. An investor evaluating market data late at night while fatigued will assign different likelihoods to hypotheses than when well-rested. A scientist reviewing experimental results while hungry may judge negative findings more harshly or favor interpretations that promise quicker resolution. These are not mere performance decrements—they reflect fundamental alterations in the brain's valuation system that determine which evidence receives attention and how its implications are weighted. Bayes' Theorem has no mechanism to detect or correct for these appetitive intrusions, yet they are ubiquitous in real-world reasoning.^10
Intrinsic Motivation: Persistence Bias and Crowding Out
Intrinsic motivation creates its own form of Bayesian dysfunction through persistence bias and crowding out. When individuals are intrinsically engaged with a hypothesis—finding the process of exploring it inherently rewarding—they exhibit resistance to disconfirming evidence that would, under rational Bayesian updating, reduce posterior probability. Curiosity states activate distinctive midbrain-hippocampus coupling that enhances learning for sought information but also for incidental material encountered during exploration, creating privileged memory consolidation for paths aligned with intrinsic goals.^1
This mechanism explains why scientists become attached to pet theories. The intrinsic reward from years of mastery-driven exploration ($w_I$ dominance) creates valuations that persist "across contexts and time, remaining valuable even without external rewards, social approval, or audience visibility"^1. When disconfirming evidence arrives, it threatens not just the hypothesis but the intrinsically rewarding process itself, leading to motivated reasoning that preserves the ability to continue intrinsically valued work^11^13. From an AIM perspective, the scientist's posterior probability is not simply P(H|E) calculated from priors and likelihoods; high $w_I$ raises the margin that disconfirming evidence must clear before it triggers an action-threshold switch to alternative hypotheses^1.
Furthermore, controlling contexts crowd out intrinsic motivation. When Bayesian updating is imposed through surveillance, rankings, or external pressure—common in research institutions and policy analysis—intrinsic pathways are suppressed and $w_M$ rises relative to $w_I$. This shift makes belief updating more sensitive to prestige signals and social comparison rather than evidential quality, paradoxically making formal Bayesian analysis less rational the more rigorously it is enforced through institutional oversight.^1
The Common-Currency Integration: Pre-Conscious Valuation
Perhaps the most fundamental challenge AIM poses to practical Bayesianism is that belief valuation occurs in a pre-conscious common-currency system before conscious deliberation. The ventromedial prefrontal cortex and ventral striatum integrate $w_A$, $w_I$, and $w_M$ inputs into a unified subjective value signal used for action selection without the agent's awareness of the weighting process. Bayes' Theorem presumes that individuals can introspect their degrees of belief and report them as probabilities, yet the neural architecture makes such introspection systematically unreliable.^1
When a researcher states P(H) = 0.8, that number represents the output of a hidden integration process combining homeostatic states, intrinsic engagement levels, and mimetic influences—not a transparent rational assessment. Computational neuroscience shows that this integration maintains switch costs and task inertia, so options must exceed current activity value by a context-dependent margin before triggering behavioral change. For Bayesian updating, this means that even when evidence logically warrants shifting to a new hypothesis, the brain's threshold-and-switching mechanisms create stickiness that violates the normative updating rule.^1
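The stickiness can be stated in a few lines. In this sketch (my formalization, with an arbitrary margin), a normative Bayesian agent would switch whenever the alternative's value exceeds the incumbent's, while the threshold-and-switch mechanism demands a surplus:

```python
def should_switch(current_value, alternative_value, switch_cost=0.15):
    """Threshold-and-switch: the alternative must beat the current activity
    by a context-dependent margin before behavior changes."""
    return alternative_value > current_value + switch_cost

# Evidence has pushed the alternative hypothesis ahead (0.55 vs 0.50),
# yet the margin keeps the agent on the incumbent hypothesis.
print(should_switch(0.50, 0.55))  # False: updating stalls despite the evidence
```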
Research in computational complexity demonstrates that exact Bayesian inference over realistic network structures is intractable even for idealized agents. When humans must navigate high-dimensional hypothesis spaces with unknown dependencies, "the computation of C may be performed quite fast even for large Bayesian networks" only if the network has "relatively small treewidth" and "relatively few possible values for each variable". Yet real-world belief systems rarely exhibit such convenient structure. The AIM Framework suggests that rather than approximating intractable Bayesian calculations, the brain substitutes them with a tractable common-currency heuristic that was never designed to implement Bayesian logic.^14
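The scale of the problem is easy to demonstrate: brute-force exact inference must sum over every joint assignment, and that count grows multiplicatively in the number of variables.

```python
def joint_size(cardinalities):
    """Number of joint assignments a brute-force marginalization sums over."""
    size = 1
    for c in cardinalities:
        size *= c
    return size

print(joint_size([2] * 20))  # 1,048,576 terms for just 20 binary variables
print(joint_size([2] * 60))  # ~1.15e18 terms: infeasible for any agent
```

Low-treewidth networks admit shortcuts around this enumeration, but nothing guarantees that real belief systems have that structure.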
Mimetic Desire and the Illusion of Objectivity
The most insidious barrier to practical Bayesianism is that mimetic desire creates an illusion of objectivity. Because mirror neuron activity and social valuation signals are filtered out before reaching consciousness, individuals genuinely believe their priors and posteriors reflect rational analysis when they are in fact substantially mimetically determined. This is not a correctable bias—it is a structural feature of the neural architecture.^3
René Girard observed that "man is the creature who does not know what to desire, and he turns to others in order to make up his mind". The AIM Framework grounds this insight in specific circuits: mirror systems in parietal and premotor cortex detect goal-directed actions, social reward circuits amplify prestige-laden targets, and both modulate ventral striatum valuation independently of appetitive or intrinsic considerations. When multiple agents converge on the same hypothesis—creating mimetic rivalry—the escalation of $w_M$ produces overconfidence and polarization that rational Bayesian discourse cannot resolve.^4^6^1
Empirical research confirms that "goal contagion" occurs when participants rate objects as more desirable after observing them as another agent's goal, with MNS–BVS effective connectivity predicting individual susceptibility to mimetic influence. In scientific contexts, this manifests as citation cascades, bandwagon effects, and theoretical fads where posterior probabilities cluster mimetically rather than tracking evidential warrant. Critics of Bayesianism like Gelman note that the "principal-agent problem" arises when "the Bayesian agent is the methodological fiction" that operates with well-defined hypothesis spaces and rational conditioning, while "the principal is the actual statistician" whose beliefs are shaped by institutional pressures, prestige hierarchies, and social observation.^11
Practical Failures of Bayesian Reasoning
The theoretical problems AIM identifies manifest in well-documented practical failures of Bayesian reasoning. Subjective prior selection draws the "most common critique" because "the very act of choosing a prior distribution immediately assumes that the true model is in the support of the prior," and there is no guarantee the true model is even being considered. In medical diagnosis, for example, Bayesian screening tests are limited by inherent constraints on predictive value as a function of disease prevalence, and "sequential testing" attempts to overcome this through repeated updating—but each iteration compounds the mimetic and appetitive distortions AIM describes.^16
In A/B testing and decision science, practitioners report that people "don't actually do" full Bayesian reasoning "because it's hard," instead substituting "easier" heuristics. This is not mere laziness—AIM suggests it reflects the brain's actual architecture, which was never designed to implement probability calculus but rather to integrate heterogeneous motivational signals for survival-relevant action selection. The common lament that Bayesian methods are "computationally intensive and time-consuming, requiring powerful computers and specialized software" misses the deeper point: even with infinite computation, humans cannot reliably provide the inputs Bayesian algorithms require.^17^1
Financial forecasting, fraud detection, and policy analysis all struggle with Bayesian methods because priors and likelihoods are extracted from agents whose valuations are distorted by the very factors AIM describes. As one analyst observes, "paranoid, crackpot hypotheses can still maintain perfect probabilistic coherence" because coherence is a mathematical property, not a guarantee of epistemic rationality when priors are mimetically or appetitively determined. The fundamental problem is that Bayesian coherence is necessary but not sufficient for truth-tracking when the psychological foundations of belief are systematically non-rational.^19^21
Why AIM Explains What Bayes Cannot
The AIM Framework is not merely another critique of Bayesian subjectivity—it provides a mechanistic, neuroscientifically grounded explanation for why the known limitations persist across contexts and cannot be overcome through training or better methodology. Standard critiques note that Bayes' Theorem is "sensitive to the prior probability" and "depends on the model", but offer no account of where priors come from or why models are chosen. AIM fills this gap by showing that priors emerge from pre-conscious integration of appetitive states, intrinsic engagements, and mimetic observations, while model selection is itself subject to $w_M$ amplification when prestigious researchers favor particular frameworks.^2^3
Philosophers have long argued that Bayes' Theorem "cannot create new explanations" and merely "assigns probabilities to existing ideas". AIM extends this by revealing that the hypothesis space itself—the set of existing ideas over which probabilities are distributed—is constructed through mimetic observation of what models pursue, not through systematic exploration of logical possibility. The much-discussed "old evidence problem" in Bayesian epistemology, where existing data has "zero confirming power" for newly formulated hypotheses, reflects the deeper issue that mimetic desire focuses on observable, object-directed pursuits rather than abstract evidential relationships that lack social salience.^22^4
Implications for Scientific Practice and Policy
If the AIM Framework is correct, the implications for fields relying on Bayesian methods are profound. Rather than treating prior elicitation as a technical challenge requiring better elicitation protocols, we must recognize it as a fundamentally intractable problem given human neural architecture. This does not mean abandoning probabilistic reasoning, but rather designing institutional practices that bound mimetic influence, stabilize appetitive states, and protect intrinsic engagement to reduce systematic distortions.^23^1
Practically, this suggests several interventions. Reduce observability of preliminary beliefs to prevent mimetic cascades where high-status early adopters determine community priors. Implement adversarial collaboration structures that deliberately elevate $w_I$ over $w_M$ by rewarding competence and truth-seeking over positional victories. Ensure appetitive sufficiency in research environments—adequate rest, nutrition, and recovery—so that physiological hijacking does not contaminate evidence evaluation during critical peer review or grant assessment. Design audience-removal tests where key judgments are made privately before social exposure to verify stability under reduced $w_M$.^23
For policy analysis using Bayesian decision theory, AIM implies that expert elicitation procedures are systematically corrupted unless they explicitly manage mimetic pressure and appetitive states. Rather than treating experts as ideal Bayesian agents whose stated probabilities reflect pure epistemic rationality, we should model them as AIM agents whose valuations blend source-specific inputs, then design aggregation mechanisms that filter mimetic clustering and appetitive volatility. This reframes the challenge from "how do we extract true probabilities from experts?" to "how do we design institutions that keep $w_I$ leading while $w_A$ is regulated and $w_M$ is bounded?".^23
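One possible aggregation mechanism of the kind described, sketched under my own assumption that tightly clustered estimates signal a shared mimetic source rather than independent evidence, down-weights each expert by the size of the cluster around them:

```python
import numpy as np

def declustered_aggregate(estimates, bandwidth=0.05):
    """Illustrative aggregator: each expert's probability estimate is
    down-weighted by the number of near-identical estimates around it,
    treating a tight cluster as one shared opinion rather than several
    independent ones. A sketch, not a validated elicitation protocol."""
    p = np.asarray(estimates, dtype=float)
    cluster_size = (np.abs(p[:, None] - p[None, :]) <= bandwidth).sum(axis=1)
    return float(np.average(p, weights=1.0 / cluster_size))

# Four experts cluster near 0.90 (a possible citation cascade); one says 0.40.
estimates = [0.90, 0.91, 0.89, 0.90, 0.40]
print(np.mean(estimates))                # naive mean: 0.80
print(declustered_aggregate(estimates))  # cluster-aware: 0.65
```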
The Neuroscientific Case Against Bayesian Rationality
The deeper lesson of AIM is that Bayesian rationality presumes a brain architecture that does not exist. The ventromedial prefrontal cortex and ventral striatum were not selected to implement probability calculus but to integrate survival-relevant signals—homeostatic needs, learning opportunities, and social information—into fast, adaptive action under ecological constraints. The fact that this system sometimes produces outputs that approximate Bayesian posteriors in simple, well-structured problems with clear feedback does not imply it was designed for, or can be extended to, the complex hypothesis spaces and ambiguous evidence that characterize most real-world inference.^14
Computational modeling confirms that exact Bayesian inference is NP-hard for realistic network structures, and even approximation algorithms are tractable only under restrictive assumptions about connectivity and cardinality that rarely hold in cognitive domains. Rather than struggling to make Bayesian models tractable through ever-more-sophisticated approximations, AIM suggests we should model cognition as it actually operates: through parallel evaluation of state-dependent homeostatic signals, persistent intrinsic valuations, and rapidly updating social cues, integrated via a common-currency system optimized for speed and robustness rather than logical coherence.^14
This does not mean human inference is irrational—it means rationality must be redefined relative to the architecture that implements it. From an AIM perspective, "freedom" is not the ability to be a perfect Bayesian updater but rather "the capacity for Intrinsic Motivation to lead action while Appetites are regulated and Mimetic pressures are recognized and managed". Epistemic rationality, then, is not conformity to Bayesian norms but the institutional and personal practices that keep $w_I$ dominant during belief formation, ensuring that curiosity, competence-seeking, and truth-tracking drive hypothesis evaluation rather than mimetic conformity or appetitive hijacking.^23
Conclusion: From Bayesian Ideals to AIM Realism
Bayes' Theorem remains a powerful mathematical tool for understanding coherent belief updating under idealized conditions. Yet its practical failure across domains—from medical diagnosis to financial forecasting to scientific inference—is not an accident of poor methodology or insufficient training. The AIM Motivational Framework reveals that human decision-making is neurologically incompatible with the assumptions Bayesian reasoning requires. Prior probabilities emerge from pre-conscious integration of mimetic, appetitive, and intrinsic signals that individuals cannot introspect or control. Evidence evaluation is systematically distorted by homeostatic states and persistence biases. Belief updating occurs through threshold-based action selection in a common-currency system that was never designed to implement probability calculus.^1
Rather than lamenting this gap between Bayesian ideals and human reality, we should embrace AIM-based realism that acknowledges the neural sources of motivation and designs epistemic practices around them. This means replacing the fiction of the rational Bayesian agent with an accurate model of the AIM agent—one whose beliefs are shaped by bodies, intrinsic engagements, and social worlds—and building institutions that harness rather than deny these influences. The path forward is not better Bayesianism but better realism: recognizing that $w_A$, $w_I$, and $w_M$ are the fundamental variables determining human choice, and designing our epistemic institutions accordingly.^2^1