Cognitive science asks how minds work — how humans perceive, learn, reason, and decide. Over the past two decades, a powerful theoretical framework has emerged: the Bayesian brain hypothesis, which proposes that the mind represents uncertainty probabilistically and updates beliefs approximately according to Bayes' theorem. This framework does not claim that humans are perfect Bayesian reasoners, but rather that Bayesian inference provides the computational-level theory — the standard of rational inference — against which actual human cognition can be measured, and that many cognitive phenomena are well explained as approximate Bayesian computations.
The Bayesian Brain
The Bayesian brain hypothesis holds that the nervous system maintains probabilistic models of the environment and uses Bayes' rule to combine sensory evidence with prior expectations. This explains a wide range of perceptual phenomena: visual illusions arise when priors dominate ambiguous evidence, sensory adaptation corresponds to updating the prior, and multisensory integration follows the optimal Bayesian cue-combination rule — weighting each sense by its reliability.
Optimal Integration (Gaussian Case)

μ̂ = (μ_v/σ²_v + μ_h/σ²_h) / (1/σ²_v + 1/σ²_h)
1/σ̂² = 1/σ²_v + 1/σ²_h

where μ_v, σ²_v are the mean and variance of the visual estimate and μ_h, σ²_h those of the haptic estimate. The combined estimate is more precise than either estimate alone.
Experimental evidence for Bayesian cue combination is strong. Subjects locating an object by sight and touch weight the two estimates in proportion to each sense's reliability, closely matching the Bayesian-optimal prediction. When one sense is degraded (e.g., when visual noise is added), weight shifts toward the other, again in quantitative agreement with the model.
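The Gaussian fusion rule above is short enough to compute directly. The sketch below implements precision-weighted averaging; the specific means and standard deviations are invented for illustration.

```python
def combine_cues(mu_v, sigma_v, mu_h, sigma_h):
    """Optimal Gaussian fusion of a visual and a haptic estimate.

    Each cue is weighted by its precision 1/sigma^2, per the
    integration equations in the text.
    """
    prec_v, prec_h = 1 / sigma_v**2, 1 / sigma_h**2
    mu_hat = (mu_v * prec_v + mu_h * prec_h) / (prec_v + prec_h)
    sigma_hat = (prec_v + prec_h) ** -0.5
    return mu_hat, sigma_hat

# Sharp vision (sigma 1.0), noisy touch (sigma 2.0): fusion leans toward vision.
mu, sigma = combine_cues(mu_v=10.0, sigma_v=1.0, mu_h=14.0, sigma_h=2.0)
print(mu, sigma)  # mu = 10.8, closer to the visual estimate
# The fused uncertainty is smaller than either cue's alone, as the text notes.
assert sigma < 1.0 and sigma < 2.0
```

Degrading one cue (raising its sigma) shifts the fused estimate toward the other, which is the manipulation the experiments described above exploit.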
Bayesian Models of Learning and Generalization
Bayesian models explain how humans learn concepts, categories, and causal structures from limited data. A child who hears "dog" applied to a few examples infers the correct extension of the word — not too narrow (only that specific dog) and not too broad (all animals). The Bayesian account holds that learners maintain a hypothesis space of possible word meanings, update it with each labeled example, and generalize according to the posterior predictive distribution. The "size principle" — that smaller, more specific hypotheses gain a larger likelihood boost from consistent evidence — explains the rapid narrowing of word meanings that children exhibit.
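The size principle can be shown in a toy posterior update. The hypothesis space, set sizes, and example scenario below are invented: suppose the labeled examples are a labrador, a poodle, and a terrier, so "dalmatians" is inconsistent with the data while "dogs" and "animals" remain live.

```python
# Toy sketch of the size principle in Bayesian word learning.
# name: (number of objects the hypothesis covers, consistent with all examples?)
hypotheses = {
    "dalmatians": (10, False),
    "dogs": (100, True),
    "animals": (1000, True),
}

def posterior(n_examples):
    """P(h | n consistent examples).

    Per the size principle, each example has likelihood 1/|h| under a
    consistent hypothesis h, so n examples contribute (1/|h|)^n.
    """
    prior = 1 / len(hypotheses)  # uniform prior for simplicity
    unnorm = {name: prior * ((1 / size) ** n_examples if ok else 0.0)
              for name, (size, ok) in hypotheses.items()}
    z = sum(unnorm.values())
    return {name: p / z for name, p in unnorm.items()}

print(posterior(1))  # "animals" retains some probability after one example
print(posterior(3))  # "dogs" dominates: smaller consistent hypotheses win
```

With three consistent examples, "dogs" is favored over "animals" by a factor of (1000/100)³ = 1000, mirroring the rapid narrowing described above.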
The relationship between Bayesian models and the heuristics-and-biases tradition (Kahneman and Tversky) is complex. Some apparent biases dissolve under Bayesian analysis — for instance, "conservatism" in probability updating may reflect rational behavior with uncertain likelihood functions, and base rate neglect may reflect reasonable priors about the diagnosticity of evidence. Others, like anchoring and availability, are harder to reconcile with Bayesian rationality. The emerging consensus is that the mind implements resource-rational approximations to Bayesian inference — optimal given limited computational resources — rather than exact Bayesian computation.
Bayesian Decision Theory and Utility
Bayesian decision theory prescribes choosing the action that maximizes expected utility with respect to the posterior distribution over states of the world. This provides the normative framework for studying human decision-making. Departures from Bayesian optimality — risk aversion, loss aversion, temporal discounting, framing effects — can be analyzed as either non-standard utility functions (still consistent with Bayesian reasoning) or as failures of the inference process (non-Bayesian belief updating).
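The prescription "maximize expected utility under the posterior" fits in a few lines. The states, actions, and payoff numbers below are invented for illustration.

```python
# Minimal sketch of Bayesian decision theory: choose the action whose
# expected utility under the posterior belief is highest.
posterior = {"rain": 0.3, "sun": 0.7}   # posterior over states of the world
utility = {                              # utility[action][state]
    "umbrella":    {"rain":  1.0, "sun": 0.5},
    "no_umbrella": {"rain": -1.0, "sun": 1.0},
}

def expected_utility(action):
    return sum(posterior[s] * utility[action][s] for s in posterior)

best = max(utility, key=expected_utility)
print(best, expected_utility(best))  # umbrella 0.65
```

Note that changing the utilities (e.g., making a soaking catastrophic) changes the choice without any change in beliefs, which is why risk and loss aversion can be modeled as non-standard utilities rather than as inference failures.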
"The mind is not a blank slate that passively receives sensory input. It is an active inference machine that generates predictions, compares them with evidence, and updates its model of the world. This is Bayes' theorem in neural hardware." — Karl Friston, architect of the free energy principle
Predictive Processing and the Free Energy Principle
Karl Friston's free energy principle proposes that the brain minimizes variational free energy — an upper bound on surprise (negative log evidence) — through perception, action, and learning. Perception corresponds to approximate Bayesian inference (updating beliefs to minimize prediction error), action corresponds to active inference (changing the world to match predictions), and learning corresponds to updating the generative model. This ambitious framework unifies perception, action, and learning under a single Bayesian variational principle, though its empirical testability remains debated.
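A minimal one-dimensional Gaussian case shows the perception half of this story: gradient descent on precision-weighted prediction errors converges to the Bayesian posterior mean. All parameters below are invented, and this is a sketch of the variational idea, not Friston's full formalism.

```python
# Toy predictive-coding sketch: descending the gradient of a Gaussian
# free energy F(mu) = (x - mu)^2 / (2*sx^2) + (mu - mu_p)^2 / (2*sp^2)
# recovers the exact posterior mean.
x, sigma_x = 2.0, 1.0        # sensory observation and its noise
mu_p, sigma_p = 0.0, 1.0     # prior belief

mu = mu_p                    # current estimate (the "belief" being updated)
for _ in range(1000):
    eps_x = (x - mu) / sigma_x**2    # sensory prediction error, precision-weighted
    eps_p = (mu - mu_p) / sigma_p**2 # deviation from the prior, precision-weighted
    mu += 0.01 * (eps_x - eps_p)     # step down the free-energy gradient

# Exact Bayesian answer for comparison (precision-weighted average).
posterior_mean = (x / sigma_x**2 + mu_p / sigma_p**2) / (1/sigma_x**2 + 1/sigma_p**2)
print(mu, posterior_mean)  # both ≈ 1.0
```

The fixed point of the update is where sensory and prior prediction errors balance, which is exactly the posterior mean: inference as error minimization.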
Bounded Rationality and Resource-Rational Analysis
Herbert Simon's bounded rationality — the observation that real agents have limited computational resources — finds natural expression in the Bayesian framework. Resource-rational analysis asks: what is the optimal inference algorithm given constraints on time, memory, and precision? The answer often looks like familiar cognitive heuristics — but derived as optimal approximations rather than arbitrary shortcuts. Monte Carlo sampling provides one such account: if the mind draws only a few samples from the posterior, it reproduces many known cognitive biases, including recency effects, probability distortion, and unpacking effects.
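The few-samples idea can be illustrated with a toy estimator. The probabilities and sample counts below are invented; the point is only that coarse sample-based estimates distort small probabilities, one flavor of the distortions mentioned above.

```python
import random
random.seed(0)

# Sketch of sample-based inference: estimate an event's probability from
# only k posterior samples. With few samples a rare event is usually judged
# impossible, but when it does appear it looks common.
def sample_estimate(p, k):
    """Estimate P(event) as the fraction of k Bernoulli(p) samples that hit."""
    return sum(random.random() < p for _ in range(k)) / k

p_true = 0.05
few = [sample_estimate(p_true, k=3) for _ in range(10000)]
many = [sample_estimate(p_true, k=300) for _ in range(10000)]

print(sum(e == 0 for e in few) / len(few))  # rare event usually missed entirely
print(min(e for e in few if e > 0))          # ...but never seen as rare: at least 1/3
print(sum(many) / len(many))                 # large samples recover ~0.05
```

Increasing k removes the distortion, which is the resource-rational trade-off: more samples buy accuracy at the price of computation.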
Theory of Mind and Social Cognition
Bayesian theory of mind models how humans infer the beliefs, desires, and intentions of other agents. Observing someone's actions, we perform inverse Bayesian inference: given the action, what goals and beliefs would make that action rational? This recursive reasoning — I believe that you believe that I believe — is formalized through nested Bayesian models and provides computational accounts of communication, teaching, deception, and cooperation that match human behavior in experimental settings.
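A minimal inverse-planning sketch makes the inference concrete. The setup is invented: an agent on a line steps left or right toward one of two goals, and the observer inverts a softmax-rational action model to infer which goal it is pursuing.

```python
import math

goals = {"A": -5, "B": +5}   # candidate goal locations on a line
position = 0                 # where the agent currently stands
beta = 2.0                   # rationality: higher = more reliably optimal agent

def action_likelihood(action, goal):
    """P(action | goal): softmax over how good each step is for that goal."""
    def value(a):
        return -abs(goals[goal] - (position + a))  # negative distance after step
    scores = {a: math.exp(beta * value(a)) for a in (-1, +1)}
    return scores[action] / sum(scores.values())

prior = {"A": 0.5, "B": 0.5}
action = +1                  # we observe one step to the right

# Bayes: P(goal | action) ∝ P(action | goal) P(goal)
unnorm = {g: prior[g] * action_likelihood(action, g) for g in goals}
z = sum(unnorm.values())
posterior = {g: p / z for g, p in unnorm.items()}
print(posterior)             # belief shifts sharply toward goal B
```

The recursive case in the text corresponds to nesting this model: the observed agent itself runs such an inference about the observer, and so on.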