The Central Test: Persistence in Private
Intrinsic Motivation Persists When Unobserved
The defining characteristic of intrinsic motivation is that it persists in private: no audience, no social signaling, no status competition. Measuring the output of unobserved activity is therefore the test for genuine intrinsic engagement.
The Three Signatures:
- Appetites (A): Cyclical, satiable, state-dependent — behavior stops when physiological need is met
- Intrinsic Motivation (I): Persists in private, autonomy-aligned, context-independent — behavior continues without observation
- Mimetic Desire (M): Observability-sensitive, rivalry-prone — behavior decreases or stops when audience is removed
Why This Matters
Traditional motivation research struggles to distinguish intrinsic engagement from status-seeking because both can look identical under observation. AIM solves this by manipulating observability: If behavior persists when made private, it was intrinsic. If it drops, it was mimetic.
Distinguishing A, I, and M: Three Methods
Method 1: Measure Physiological State (for A)
Appetitive motivation is state-dependent and physiologically measurable. Control for basic needs to isolate I and M.
Physiological Markers:
- Glucose levels, hydration status
- Sleep duration and quality
- Core temperature regulation
- Circadian rhythm patterns
Behavioral Signatures:
- Satisfaction upon consumption (behavior stops when sated)
- Predictable cyclical patterns
- Reduced wanting after physiological need is met
Method 2: Manipulate Observability (for I vs M)
This is the critical test. Compare behavior under public versus private conditions to isolate intrinsic from mimetic motivation.
Experimental Design:
- Condition A (Public): Activity is observable, visible to others, status-relevant
- Condition B (Private): Same activity, completely unobserved, no social signaling possible
- Measure: Time spent, effort exerted, output produced, persistence over time
Predicted Outcomes:
- If I-dominated: No decrease (or slight increase) in private condition
- If M-dominated: Significant decrease in private condition
- If mixed: Partial decrease reveals relative weights wI and wM
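Under a simple additive model (public output = intrinsic contribution + mimetic contribution, with appetites already controlled for), the predicted outcomes above translate directly into weight estimates. A minimal sketch; the function name, the additive assumption, and the example numbers are illustrative, not part of AIM itself:

```python
def estimate_weights(output_public: float, output_private: float) -> dict:
    """Estimate relative intrinsic (wI) and mimetic (wM) weights.

    Assumes an additive model: public output = I + M, private output = I
    (the mimetic component vanishes without an audience). Appetites (A)
    are assumed to be controlled for before measurement.
    """
    if output_public <= 0:
        raise ValueError("public-condition output must be positive")
    intrinsic = output_private
    mimetic = max(output_public - output_private, 0.0)
    total = intrinsic + mimetic
    return {"wI": intrinsic / total, "wM": mimetic / total}

# A participant who writes 10 hours/week publicly but 7 privately:
weights = estimate_weights(10.0, 7.0)  # wI = 0.7, wM = 0.3
```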
Method 3: Longitudinal Tracking (for stability)
Intrinsic motivation shows stable persistence over time (6-12+ months), while mimetic motivation tracks social context and shows volatility.
Track Over Time:
- Engagement levels without external rewards
- Response to social context changes
- Stability across different environments
- Resistance to mimetic triggers
Calculating the Mimetic Premium
A key empirical method for quantifying motivational sources is decomposing prices and valuations into appetitive (A), intrinsic (I), and mimetic (M) components.
The Mimetic Premium (VM)
The excess value paid for social signaling, separate from basic need satisfaction and intrinsic quality. This reveals how much of a price is driven by status competition versus actual use value.
Example: Premium Bottled Water
- Total Price: $5.00
- A-component (Appetite): $0.50 — basic hydration value
- I-component (Intrinsic): $0.50 — taste, convenience, quality preference
- M-component (Mimetic): $4.00 — brand status, social signaling
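The decomposition above is a residual calculation: whatever price remains after appetitive and intrinsic value are accounted for is attributed to M. A sketch of that arithmetic, with the helper name chosen for illustration:

```python
def decompose_price(total: float, a: float, i: float) -> dict:
    """Decompose a price into A/I/M components.

    The mimetic component is the residual after appetitive and intrinsic
    value are accounted for: VM = total - (A + I).
    """
    m = total - (a + i)
    return {"A": a, "I": i, "M": m, "M_share": m / total}

# The bottled-water example: $5.00 total, $0.50 appetitive, $0.50 intrinsic
water = decompose_price(5.00, a=0.50, i=0.50)  # M = $4.00, i.e. 80% mimetic
```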
Method 1: Observability Price Premium
Compare willingness-to-pay for identical products under public versus private consumption conditions.
Design:
- Condition A (Public): Product will be visibly consumed in social setting
- Condition B (Private): Product consumed alone, not visible to others
- Measure: Maximum willingness-to-pay in each condition
- Calculate: Mimetic Premium = WTP_public - WTP_private
Prediction: Products with high M-components (luxury brands, status goods) show large price premiums in public conditions. I-driven products show minimal or no premium.
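The Method 1 calculation can be sketched as follows; the product names and WTP figures are fabricated for illustration:

```python
def mimetic_premium(wtp_public: float, wtp_private: float) -> float:
    """Mimetic premium = WTP under public consumption minus WTP in private."""
    return wtp_public - wtp_private

# Hypothetical elicited valuations (public WTP, private WTP):
products = {
    "luxury watch": (450.0, 120.0),  # high-M: valuation collapses in private
    "work gloves": (18.0, 17.5),     # high-I: valuation barely moves
}
premiums = {name: mimetic_premium(pub, priv)
            for name, (pub, priv) in products.items()}
```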
Method 2: Hedonic Price Decomposition
Use regression analysis to isolate the contribution of functional attributes (A+I) versus brand/status attributes (M).
Approach:
1. Identify functional attributes (nutritional value, durability, performance specs)
2. Identify status attributes (brand prestige, celebrity endorsement, exclusivity)
3. Regress price on both sets of attributes
4. VM = coefficient on status attributes × attribute level
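The four steps above can be sketched with ordinary least squares. The attribute scores and prices below are fabricated (generated from an exact linear rule so the recovered coefficients come out clean); real data would of course carry noise:

```python
import numpy as np

# Hypothetical data: 6 products scored on one functional attribute and one
# status attribute. Prices were generated as 2 + 5*functional + 8*status.
functional = np.array([3.0, 5.0, 4.0, 2.0, 6.0, 5.0])
status = np.array([1.0, 4.0, 2.0, 1.0, 5.0, 3.0])
price = np.array([25.0, 59.0, 38.0, 20.0, 72.0, 51.0])

# Step 3: regress price on an intercept plus both attribute sets (OLS).
X = np.column_stack([np.ones_like(price), functional, status])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
intercept, beta_functional, beta_status = coef

# Step 4: mimetic value = status coefficient x the product's status level.
def v_m(status_level: float) -> float:
    return beta_status * status_level
```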
Method 3: Experimental Valuation Under Anonymity
Manipulate whether purchases are anonymous or publicly visible, then measure valuation changes.
Protocol:
- Baseline: Elicit WTP for products in standard marketplace conditions
- Anonymous condition: Guarantee that purchase is completely private (unmarked packaging, no receipts, anonymous payment)
- Public condition: Purchase visible to relevant social group
- Compare: Valuation changes reveal M-component sensitivity
Applications of Mimetic Premium Measurement
- Market analysis: Predict price sensitivity to social visibility changes
- Brand positioning: Quantify how much value comes from status versus quality
- Consumer segmentation: Identify high-M versus high-I consumer segments
- Pricing strategy: Test whether premium pricing is justified by functional value or requires social signaling
- Product categories: Map entire industries by A/I/M composition (e.g., luxury goods = high M, tools = high I, groceries = high A)
Validated Measurement Scales
Combine observability manipulation with validated psychological scales for robust measurement:
For Intrinsic Motivation (I)
- Intrinsic Motivation Inventory (IMI)
- Flow State Scale
- Basic Psychological Needs Scale (autonomy subscale)
- Self-Determination Index
Behavioral indicators: Time distortion during activity, persistence without rewards, resistance to the overjustification effect
For Mimetic Desire (M)
- Social Comparison Orientation Scale
- Status-Seeking Scale
- Observability sensitivity measures (custom)
Experimental manipulations: Model status variations, audience presence/absence, visibility conditions
Five Core Experimental Designs
1. Private-Then-Public Sequence
Test whether the SEQUENCE of feedback affects motivational source. AIM predicts that establishing private competence before introducing social comparison preserves intrinsic motivation.
Design:
- Condition A: Private competence feedback → 1 week later → public ranking
- Condition B: Public ranking → private feedback
- Condition C: Private feedback only (control)
- Measure: Subsequent engagement when made fully private (6 weeks later)
Falsification: If Condition A performs like Condition B, sequence timing doesn't matter (wI protection fails)
2. Audience Removal Test
Compare performance/engagement before and after removing social observation. The drop in output reveals mimetic weight.
Design:
- Phase 1: Public activity with audience/ranking (baseline)
- Phase 2: Same activity, completely private (no visibility)
- Measure: % change in time, effort, output
Example applications: Legal settlements (sealed vs public), exercise programs (solo vs group), work output (remote vs office)
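The Phase 1 to Phase 2 comparison reduces to a percentage change; a minimal sketch with illustrative numbers (the gym example and figures are fabricated):

```python
def audience_removal_change(public_output: float, private_output: float) -> float:
    """Percent change in output after removing observation (negative = drop).

    Under AIM, the size of the drop indexes the mimetic weight of the
    behavior; little or no drop indicates I-dominated motivation.
    """
    return 100.0 * (private_output - public_output) / public_output

# Gym attendance: 12 sessions/month in a group class, 9 once training alone.
change = audience_removal_change(12.0, 9.0)  # -25.0 (% drop)
```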
3. Model Status Manipulation
Vary the status of social models to test whether behavior change tracks model characteristics rather than object attributes (mimetic mechanism).
Design:
- Condition A: High-status model demonstrates choice
- Condition B: Low-status model demonstrates same choice
- Condition C: No model (control)
- Measure: Adoption rate and persistence
Falsification: If status makes no difference, mimetic transmission claim fails
4. Appetitive Control Protocol
Ensure physiological needs are met before testing I vs M to prevent appetitive deficits from confounding results.
Standardize Before Testing:
- Meal timing (test 2-3 hours after eating)
- Sleep duration (minimum 7 hours)
- Hydration status
- Environmental comfort (temperature, noise)
Why critical: Hungry, tired, or uncomfortable participants cannot reliably demonstrate intrinsic engagement
5. Longitudinal Stability Test
Track behavior over 6-12+ months to distinguish stable intrinsic engagement from volatile mimetic patterns.
Design:
- High-I condition: Design emphasizing skill progression, private milestones, autonomy
- High-M condition: Design emphasizing leaderboards, social comparison, status markers
- Measure: 6-month and 12-month retention curves
Falsification: If both show identical retention, I vs M classification fails
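A sketch of the retention comparison; the monthly active-participant counts are fabricated to illustrate the predicted stable-versus-volatile pattern, not empirical results:

```python
def retention(cohort_active: list) -> list:
    """Fraction of the starting cohort still active at each monthly check-in."""
    start = cohort_active[0]
    return [n / start for n in cohort_active]

# Hypothetical monthly active counts over 12 months for each condition:
high_i = [100, 92, 88, 85, 83, 82, 81, 80, 79, 79, 78, 78]  # stable decay
high_m = [100, 95, 70, 82, 50, 61, 38, 45, 30, 33, 25, 24]  # volatile, steep

r_i, r_m = retention(high_i), retention(high_m)
# AIM's prediction: the high-I curve dominates the high-M curve at 12 months.
prediction_holds = r_i[-1] > r_m[-1]
```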
Statistical Analysis Methods
Mediation Analysis
Test whether changes in motivational source (wA, wI, or wM) mediate the relationship between interventions and outcomes. This validates that the proposed mechanism is actually responsible for observed effects.
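A minimal sketch of the indirect (a × b) approach to mediation, using simulated data in place of a real study; the variable names, effect sizes, and noise levels are arbitrary choices for illustration:

```python
import numpy as np

# Simulated data: X = intervention (0/1), M = measured change in mimetic
# weight wM (the proposed mediator), Y = engagement outcome.
rng = np.random.default_rng(0)
n = 200
X = rng.integers(0, 2, n).astype(float)
M = 0.5 * X + rng.normal(0, 0.1, n)            # path a: X shifts the mediator
Y = 0.8 * M + 0.1 * X + rng.normal(0, 0.1, n)  # path b plus a direct effect

def ols(design: np.ndarray, y: np.ndarray) -> np.ndarray:
    """OLS coefficients with an intercept prepended to the design matrix."""
    A = np.column_stack([np.ones(len(y)), design])
    return np.linalg.lstsq(A, y, rcond=None)[0]

a = ols(X[:, None], M)[1]               # effect of X on the mediator
b = ols(np.column_stack([M, X]), Y)[1]  # effect of M on Y, controlling for X
indirect_effect = a * b                 # the mediated (a*b) component
```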
Within-Subject Designs
Use participants as their own controls when manipulating observability. Measure the same person's behavior under public and private conditions to isolate mimetic effects while controlling for individual differences.
Pre-registration Required
All hypothesis tests must be pre-registered with explicit predictions, sample sizes, analysis plans, and falsification criteria to prevent p-hacking and selective reporting. Include predicted effect sizes.
Bayesian Evidence Accumulation
Use Bayesian statistics to quantify evidence for and against AIM predictions, allowing accumulation across studies rather than binary reject/accept decisions. Report Bayes Factors for each prediction.
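As one illustration of Bayes-factor reporting, consider a simple beta-binomial test of a directional AIM prediction: did more participants than chance reduce effort once observation was removed? This particular test and its numbers are assumptions for illustration, not a prescribed AIM analysis:

```python
from math import comb

def bayes_factor_binomial(k: int, n: int) -> float:
    """BF10 for k 'successes' in n trials: H1 (p ~ Uniform(0,1)) vs H0 (p = 0.5).

    Under a uniform prior the marginal likelihood of the data is 1/(n+1);
    under the chance hypothesis it is C(n, k) * 0.5**n.
    """
    m1 = 1.0 / (n + 1)
    m0 = comb(n, k) * 0.5 ** n
    return m1 / m0

# 23 of 30 participants reduced effort once observation was removed:
bf = bayes_factor_binomial(23, 30)  # roughly 17: evidence for a real effect
```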
Framework-Level Falsification Criteria
The AIM Framework Would Be Falsified If:
- The three sources cannot be reliably distinguished: If manipulating observability, controlling physiological state, and longitudinal tracking fail to produce consistent separations between A, I, and M, the framework lacks empirical grounding.
- Private behavior shows no persistence pattern: If removing observation has random or inconsistent effects on behavior (rather than consistently revealing I vs M), the core distinction fails.
- Interventions targeting specific sources produce opposite effects: If removing audience increases mimetic behavior, or if meeting appetitive needs decreases intrinsic engagement, the causal mechanisms are wrong.
- Cross-domain predictions systematically fail: If the same mechanisms (e.g., observability manipulation) work in one domain (education) but fail in others (health, law, organizations), the framework isn't general.
- Alternative models consistently outperform AIM: If simpler models (single motivation source, or traditional utility) make better predictions without the A/I/M distinction, Occam's razor favors the alternative.
Cross-Domain Validation Strategy
A critical test of AIM is whether the same observability manipulations produce consistent results across diverse domains:
Education Domain
Does removing public ranking increase intrinsic learning engagement? Test with private-then-public feedback sequence.
Health Domain
Does removing social visibility increase exercise persistence? Test flow-based vs appearance-based program retention.
Legal Domain
Does sealing proceedings increase settlement rates? Test public vs confidential mediation outcomes.
Organizational Domain
Does reducing visibility of individual metrics decrease rivalry? Test private vs public performance feedback.
Validation requirement: The same mechanism (audience removal reducing M-driven behavior) must work across ALL domains for the framework to be considered validated.
Open Science Commitments
Open Data
Share de-identified datasets including all observability manipulations, physiological measurements, and longitudinal tracking to enable reanalysis and meta-analysis
Pre-registration
Register all hypotheses, methods, and analyses before data collection, including predicted effect sizes for observability manipulations
Direct Replications
Actively support exact replications of observability experiments across different labs and populations
Multi-Lab Collaborations
Coordinate large-scale studies testing the same observability manipulations across multiple domains simultaneously