Neuroscience faces the challenge of inferring what the brain is doing — what a subject perceives, intends, or remembers — from noisy, indirect, and high-dimensional neural recordings. Whether the data come from single-neuron recordings, EEG, fMRI, or calcium imaging, the inference problem is fundamentally Bayesian: given the observed neural activity, what is the posterior distribution over the stimulus, intention, or cognitive state that produced it? This perspective unifies neural decoding, brain-computer interfaces, and computational models of neural information processing.
Bayesian Neural Decoding
Neural decoding reconstructs a stimulus or behavior from recorded neural activity. A Bayesian decoder computes the posterior distribution over the decoded variable given the spike trains or signal amplitudes, combining a neural encoding model (how neurons respond to stimuli) with a prior on the stimulus (e.g., movement trajectories are smooth).
The decoder applies Bayes' rule:

P(s | r) ∝ P(r | s) P(s)

where
  s = stimulus or intended movement (continuous or discrete)
  r = {r₁, r₂, ..., rₙ} = recorded neural activity (spike counts, LFP, etc.)
  P(r | s) = encoding model (tuning curves, GLM, or neural network)
  P(s) = prior on stimulus (e.g., continuity, biomechanical constraints)
For population decoding with independent Poisson spiking neurons and log-concave tuning curves, the Bayesian decoder is equivalent to a population vector with optimal weighting. But the Bayesian framework extends naturally to non-Poisson noise, correlated neurons, history-dependent firing, and complex priors — all of which are essential for accurate decoding from real neural data.
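A minimal sketch of Poisson population decoding, under assumed cosine-like tuning curves and a flat prior over direction (the tuning parameters and neuron count here are illustrative, not from any real dataset). The log posterior over a stimulus grid is the sum of Poisson log likelihoods across independent neurons:

```python
import numpy as np

rng = np.random.default_rng(0)
directions = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)  # candidate stimuli s

# Assumed tuning curves for 20 neurons: baseline + von-Mises-shaped bump
prefs = rng.uniform(0.0, 2 * np.pi, 20)  # preferred directions

def rates(s):
    # Expected spike count of each neuron for stimulus s
    return 2.0 + 15.0 * np.exp(2.0 * (np.cos(s - prefs) - 1.0))

# Simulate one trial of independent Poisson spiking at the true direction
true_s = np.pi / 3
r = rng.poisson(rates(true_s))

# Log posterior (flat prior): log P(s|r) = sum_i [r_i log f_i(s) - f_i(s)] + const
log_post = np.array([np.sum(r * np.log(rates(s)) - rates(s)) for s in directions])
post = np.exp(log_post - log_post.max())  # subtract max for numerical stability
post /= post.sum()

s_map = directions[np.argmax(post)]  # MAP estimate of the direction
```

The same loop works unchanged with a non-flat prior: add log P(s) to `log_post` before normalizing, which is how smoothness or biomechanical constraints enter the decoder.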
Brain-Computer Interfaces
Brain-computer interfaces (BCIs) translate neural activity into control signals for prosthetic limbs, computer cursors, or communication devices. The decoding algorithm at the heart of a BCI is a real-time Bayesian filter — typically a Kalman filter or point-process filter — that sequentially updates the posterior estimate of intended movement as new neural data arrive every few milliseconds.
The most widely used BCI decoder is the Kalman filter, which models intended velocity as a linear dynamical system with Gaussian noise, observed through a linear mapping from motor cortex firing rates. Despite its simplicity, the Kalman filter achieves remarkable performance in intracortical BCIs — users can control computer cursors and robotic arms with neural signals alone. Extensions to nonlinear dynamics (unscented Kalman filter), point-process observation models, and recurrent neural network decoders improve performance further, but all share the same Bayesian sequential updating logic.
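The predict–update cycle of a Kalman-filter decoder can be sketched in a few lines. The dynamics, tuning matrix, and noise levels below are made-up stand-ins for what a real BCI would estimate during calibration; the state is a 2-D intended velocity observed through noisy firing rates:

```python
import numpy as np

def kalman_step(x, P, y, A, Q, C, R):
    # Predict: propagate posterior through the movement prior x_t = A x_{t-1} + w
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: fold in the new neural observation y_t = C x_t + v
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(1)
n_neurons, d = 12, 2
A = 0.95 * np.eye(d)                  # smooth-velocity prior (assumed)
Q = 0.05 * np.eye(d)
C = rng.normal(size=(n_neurons, d))   # each neuron's tuning to velocity (assumed)
R = 0.5 * np.eye(n_neurons)

# Simulate a trial and decode it online, one bin at a time
x_true = np.zeros(d)
x_est, P = np.zeros(d), np.eye(d)
errs = []
for t in range(200):
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(d), Q)
    y = C @ x_true + rng.multivariate_normal(np.zeros(n_neurons), R)
    x_est, P = kalman_step(x_est, P, y, A, Q, C, R)
    errs.append(np.sum((x_est - x_true) ** 2))
```

Each call to `kalman_step` is one cycle of the sequential Bayesian update described above; swapping the Gaussian observation model for a point-process likelihood changes only the update step, not the overall logic.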
Neural Encoding Models
Understanding how neurons encode information requires fitting encoding models — statistical models of how neural activity depends on stimuli, behavior, and internal state. Bayesian estimation of generalized linear models (GLMs) for spike trains provides posterior distributions over tuning curve parameters, coupling filters, and history effects. Hierarchical Bayesian models share information across neurons, sessions, and subjects, enabling stable estimation even from limited recording time.
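As a hedged illustration of Bayesian GLM fitting, the sketch below computes the MAP estimate of a Poisson GLM with a Gaussian (ridge) prior on the weights by gradient ascent, then uses a Laplace approximation for the posterior covariance. The design matrix and true weights are simulated; in a real analysis the columns of X would hold stimulus, coupling, and spike-history covariates:

```python
import numpy as np

rng = np.random.default_rng(2)
T, p = 2000, 5
X = rng.normal(size=(T, p))           # design matrix (covariates per time bin)
w_true = np.array([0.5, -0.3, 0.0, 0.8, 0.2])
y = rng.poisson(np.exp(X @ w_true))   # spike counts, rate = exp(X w)

# Log posterior: sum_t [y_t x_t·w - exp(x_t·w)] - (lam/2) ||w||^2
lam = 1.0
w = np.zeros(p)
for _ in range(200):                  # gradient ascent to the MAP estimate
    rate = np.exp(X @ w)
    grad = X.T @ (y - rate) - lam * w
    w += 1e-4 * grad

# Laplace approximation: posterior covariance = inverse Hessian at the MAP
H = X.T @ (np.exp(X @ w)[:, None] * X) + lam * np.eye(p)
post_cov = np.linalg.inv(H)
```

The diagonal of `post_cov` gives approximate posterior variances for each filter weight; a hierarchical extension would replace the fixed `lam` with a shared prior learned across neurons.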
"The brain itself is a Bayesian machine — it infers the causes of sensory input from noisy, ambiguous signals. It is fitting that our methods for understanding the brain should be Bayesian as well." — Alexandre Pouget, University of Geneva
Neural Population Dynamics
Modern neuroscience views neural activity as trajectories through a high-dimensional state space, with latent dynamics that evolve on low-dimensional manifolds. Bayesian latent variable models — Gaussian process factor analysis (GPFA), latent linear dynamical systems, and switching linear dynamical systems — extract these latent dynamics from noisy population recordings. The posterior distribution over latent trajectories reveals the computational structure of neural circuits, such as the rotational dynamics in motor cortex during movement planning.
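The simplest instance of this idea is a one-dimensional latent linear dynamical system, where the posterior over the latent trajectory given all the data comes from a Kalman filter followed by an RTS smoother. The sketch below assumes the model parameters are known; GPFA and LDS toolboxes additionally learn them (typically via EM):

```python
import numpy as np

rng = np.random.default_rng(3)
T, n = 100, 8
a, q = 0.98, 0.1           # latent dynamics: z_t = a z_{t-1} + N(0, q)
c = rng.normal(size=n)     # loading of each neuron on the latent (assumed known)
R = 0.5                    # observation noise variance per neuron

# Simulate the latent trajectory and the population activity it drives
z = np.zeros(T)
for t in range(1, T):
    z[t] = a * z[t - 1] + rng.normal(0.0, np.sqrt(q))
Y = np.outer(z, c) + rng.normal(0.0, np.sqrt(R), size=(T, n))

# Kalman filter (forward pass): filtered mean m[t], variance P[t]
m, P = np.zeros(T), np.zeros(T)
m_pred, P_pred = 0.0, 1.0
I_n = np.eye(n)
for t in range(T):
    S = P_pred * np.outer(c, c) + R * I_n   # innovation covariance
    K = P_pred * c @ np.linalg.inv(S)       # Kalman gain (row vector)
    m[t] = m_pred + K @ (Y[t] - c * m_pred)
    P[t] = (1.0 - K @ c) * P_pred
    m_pred, P_pred = a * m[t], a * a * P[t] + q

# RTS smoother (backward pass): posterior over z_t given ALL the data
ms, Ps = m.copy(), P.copy()
for t in range(T - 2, -1, -1):
    P_pred_t = a * a * P[t] + q
    G = a * P[t] / P_pred_t
    ms[t] = m[t] + G * (ms[t + 1] - a * m[t])
    Ps[t] = P[t] + G * G * (Ps[t + 1] - P_pred_t)
```

The smoothed variances `Ps` are never larger than the filtered ones, which is the quantitative sense in which conditioning on the whole trial sharpens the posterior over the latent trajectory.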
Connectomics and Circuit Inference
Bayesian methods infer synaptic connectivity from observational data — cross-correlation of spike trains, calcium imaging time series, or optogenetic perturbation experiments. The posterior probability of a connection between two neurons, given the observed statistical dependencies and the prior expected sparsity of connectivity, provides a principled alternative to ad hoc thresholding of correlation measures. Bayesian network inference methods are being applied to large-scale connectomic datasets to map the wiring diagrams of neural circuits.
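A toy version of this calculation shows why a sparsity prior matters. Below, a cross-correlation test statistic is modeled as standard normal when the pair is unconnected and shifted when it is connected; both distributions and the 5% prior are illustrative assumptions, not fitted quantities:

```python
import numpy as np

def connection_posterior(stat, prior_connected=0.05, mu_conn=4.0, sigma=1.0):
    """Posterior probability of a connection given a cross-correlation z-score.

    Assumes stat ~ N(0, sigma) if unconnected and N(mu_conn, sigma) if
    connected, with a sparsity prior P(connected) = prior_connected.
    """
    def normpdf(x, mu, s):
        return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

    like_conn = normpdf(stat, mu_conn, sigma)
    like_none = normpdf(stat, 0.0, sigma)
    num = prior_connected * like_conn
    return num / (num + (1.0 - prior_connected) * like_none)
```

With these assumptions, a z-score of 3 — comfortably past a naive significance threshold — yields a posterior connection probability of only about 0.74, because the sparsity prior discounts borderline evidence; this is the sense in which the Bayesian treatment improves on ad hoc thresholding.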