In Bayesian epistemology and statistics, a credence (sometimes called a degree of belief or subjective probability) is the fundamental unit of epistemic attitude. Rather than the all-or-nothing stance of traditional epistemology — you either believe something or you do not — Bayesian agents maintain graded beliefs. A credence of 0.9 in the proposition "it will rain tomorrow" expresses high but not absolute confidence; a credence of 0.5 expresses maximal uncertainty between rain and no rain.
This numerical representation of belief is not merely a metaphor. Credences are genuine probabilities: they must satisfy the Kolmogorov axioms (non-negativity, normalization, and additivity for mutually exclusive events) and they update via Bayes' theorem when new evidence arrives. The entire apparatus of Bayesian inference — priors, likelihoods, posteriors, predictive distributions — is built on the foundation of credences as probabilities.
Formally, a credence function Cr maps propositions to real numbers:

Cr : F → [0, 1], where F is a set of propositions (a sigma-algebra)
Cr(~A) = 1 − Cr(A)
Cr(A ∪ B) = Cr(A) + Cr(B) when A and B are mutually exclusive
Cr(Ω) = 1 (tautologies receive credence 1)
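As an illustrative sketch (not from the source), these axioms can be checked mechanically for credences over a small finite space; is_coherent below is a hypothetical helper:

```python
from itertools import combinations

def is_coherent(cr, atoms, tol=1e-9):
    """Check Kolmogorov coherence for credences over a finite space.

    cr    -- dict mapping frozensets of atoms (propositions) to credences
    atoms -- the exhaustive, mutually exclusive possibilities (Omega)
    """
    omega = frozenset(atoms)
    # Non-negativity
    if any(p < -tol for p in cr.values()):
        return False
    # Normalization: the tautology receives credence 1
    if abs(cr.get(omega, 0.0) - 1.0) > tol:
        return False
    # Finite additivity for mutually exclusive propositions
    for a, b in combinations(cr, 2):
        if a.isdisjoint(b) and (a | b) in cr:
            if abs(cr[a | b] - (cr[a] + cr[b])) > tol:
                return False
    return True

atoms = {"rain", "no_rain"}
cr = {
    frozenset({"rain"}): 0.7,
    frozenset({"no_rain"}): 0.3,
    frozenset(atoms): 1.0,
}
print(is_coherent(cr, atoms))  # True
```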
From Belief to Number
The central question is: how can something as elusive as belief be assigned a precise number? Frank Ramsey proposed the answer in 1926: credences are measured by their behavioral consequences, specifically through betting behavior. If you are willing to pay up to 70 cents for a bet that pays 1 dollar if it rains tomorrow and nothing otherwise, then your credence in rain is 0.7. Ramsey showed that if your preferences among bets satisfy certain consistency axioms, they can be uniquely represented by a probability function — your credence function.
This operationalist approach has been refined in various ways. Savage (1954) derived credences from preferences over acts in a more general decision-theoretic framework. De Finetti showed that credences revealed through betting must satisfy the probability axioms on pain of sure loss (the Dutch book argument). More recently, accuracy-based approaches argue that credences should be probabilities because any non-probabilistic credence function is accuracy-dominated: some probabilistic alternative lies closer to the truth in every possible world.
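A toy numerical sketch of the Dutch book argument (the credences and stakes below are illustrative, not from the source): an agent whose credences in A and ~A sum to more than 1 will accept a pair of bets that loses money however the world turns out.

```python
# Incoherent credences: Cr(A) + Cr(~A) = 1.2 > 1.
# At these credences the agent pays up to $0.60 for each $1 bet.
cr_A, cr_not_A = 0.6, 0.6

price_A = cr_A          # price she accepts for a $1 bet on A
price_not_A = cr_not_A  # price she accepts for a $1 bet on ~A

for a_is_true in (True, False):
    # Exactly one of the two bets pays $1, whichever way A turns out.
    payout = (1.0 if a_is_true else 0.0) + (0.0 if a_is_true else 1.0)
    net = payout - (price_A + price_not_A)
    print(f"A={a_is_true}: agent's net = {net:+.2f}")
# Either way the agent loses $0.20 -- a guaranteed loss, i.e. a Dutch book.
```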
A persistent question in epistemology is the relationship between credences and "full" or "outright" beliefs. The Lockean thesis proposes that you believe a proposition if and only if your credence in it exceeds some threshold t (typically greater than 0.5). But this leads to the lottery paradox: for each ticket in a million-ticket lottery, your credence that it will lose exceeds any reasonable threshold, so you believe each ticket will lose — yet you also believe some ticket will win. This tension between graded and categorical belief remains an active area of philosophical research.
Properties of Rational Credences
Bayesian epistemology places normative constraints on credences. At any given time, a rational agent's credences should be coherent — that is, they should satisfy the probability axioms. Over time, credences should update by conditionalization: upon learning E, the new credence in H should be the old conditional credence Cr(H | E).
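As a minimal sketch of conditionalization, the hypothetical function below computes the new credence Cr(H | E) from a prior and two likelihoods via Bayes' theorem (the medical-test-style numbers are invented for illustration):

```python
def bayes_update(prior_h, lik_e_given_h, lik_e_given_not_h):
    """Posterior credence Cr(H | E) via Bayes' theorem."""
    cr_e = lik_e_given_h * prior_h + lik_e_given_not_h * (1 - prior_h)
    return lik_e_given_h * prior_h / cr_e

# Prior 0.01, a test with 95% sensitivity and a 5% false-positive rate:
print(bayes_update(0.01, 0.95, 0.05))  # ~0.161
```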
These constraints have substantive consequences. They imply, for example, that credences must respect logical entailment: if A entails B, then Cr(A) ≤ Cr(B). They imply that credences in a proposition and its negation must sum to 1. And they imply the law of total probability: Cr(H) = Cr(H | E) · Cr(E) + Cr(H | ~E) · Cr(~E).
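These consequences can be checked numerically. The sketch below builds an arbitrary coherent credence function over four possible worlds and verifies the three properties just listed (the worlds and propositions are hypothetical):

```python
import random

random.seed(0)
# A random but coherent credence function over four atomic worlds.
w = [random.random() for _ in range(4)]
total = sum(w)
w = [x / total for x in w]

def cr(worlds):
    """Credence in a proposition = sum of weights of its worlds."""
    return sum(w[i] for i in worlds)

A, B = {0}, {0, 1}            # A entails B (A's worlds are a subset of B's)
E, H = {0, 2}, {1, 2}
not_E = {0, 1, 2, 3} - E

assert cr(A) <= cr(B)                          # entailment respected
assert abs(cr(E) + cr(not_E) - 1.0) < 1e-9     # Cr(E) + Cr(~E) = 1
# Law of total probability:
lhs = cr(H)
rhs = (cr(H & E) / cr(E)) * cr(E) + (cr(H & not_E) / cr(not_E)) * cr(not_E)
assert abs(lhs - rhs) < 1e-9
print("All coherence consequences verified.")
```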
Calibration and Accuracy
A credence function is well-calibrated if, among all propositions assigned credence p, the proportion that turn out true is approximately p. Perfect calibration means that your 70% credences come true 70% of the time, your 90% credences come true 90% of the time, and so on. Calibration is an empirically testable property of a credence function and is one of the primary ways of assessing epistemic performance.
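A minimal calibration check might look like the following sketch, which bins a record of (credence, outcome) pairs and compares the mean credence in each bin with the observed frequency of truth (the forecast record is invented for illustration):

```python
from collections import defaultdict

def calibration_table(forecasts, n_bins=10):
    """Bin (credence, outcome) pairs; compare mean credence with
    the observed frequency of truth in each bin."""
    bins = defaultdict(list)
    for credence, outcome in forecasts:
        bins[min(int(credence * n_bins), n_bins - 1)].append((credence, outcome))
    for b in sorted(bins):
        pairs = bins[b]
        mean_cr = sum(c for c, _ in pairs) / len(pairs)
        freq = sum(o for _, o in pairs) / len(pairs)
        print(f"bin {b}: mean credence {mean_cr:.2f}, "
              f"observed frequency {freq:.2f}, n={len(pairs)}")

# Hypothetical record: (credence assigned, 1 if it came true else 0)
calibration_table([(0.7, 1), (0.7, 1), (0.7, 0), (0.9, 1), (0.9, 1), (0.3, 0)])
```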
For propositions H₁, …, Hₙ with credences Cr(Hᵢ), the Brier score is

Brier = (1/n) Σᵢ (Cr(Hᵢ) − I(Hᵢ))²

where I(Hᵢ) = 1 if Hᵢ is true, 0 if false. A lower Brier score means more accurate credences.
The Brier score, introduced by Glenn Brier in 1950 for weather forecasting, measures the mean squared distance between credences and truth values. It is a strictly proper scoring rule: it is uniquely minimized in expectation when the agent reports her true credences. This property makes it a natural tool for both evaluating and eliciting honest credences.
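A sketch of the Brier score, together with a brute-force check of strict propriety (the example credences and the grid search are illustrative choices):

```python
def brier(credences, outcomes):
    """Mean squared distance between credences and 0/1 truth values."""
    return sum((c - o) ** 2 for c, o in zip(credences, outcomes)) / len(credences)

print(brier([0.9, 0.7, 0.2], [1, 1, 0]))  # ~0.047

# Strict propriety: with true credence p in H, the expected score of a
# reported credence r is p(r - 1)^2 + (1 - p)r^2, minimized only at r = p.
p = 0.7
def expected_score(r):
    return p * (r - 1) ** 2 + (1 - p) * r ** 2

best = min((r / 100 for r in range(101)), key=expected_score)
print(best)  # 0.7: honest reporting is optimal
```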
Imprecise Credences
Some epistemologists argue that a single precise credence function is too demanding. When evidence is sparse or ambiguous, it may be more rational to adopt a set of credence functions — an approach known as imprecise probability or mushy credences. On this view, endorsed by philosophers such as Isaac Levi and Peter Walley, an agent's epistemic state is represented by a convex set of probability functions rather than a single one. Decisions are made by requiring that every function in the set agree on the preferred act.
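A minimal sketch of this unanimity rule, assuming a hypothetical choice between betting on H and abstaining, with the credal set represented simply as a list of values for Cr(H):

```python
def unanimously_preferred(acts, credal_set):
    """Return the act every credence function in the set ranks best.

    acts       -- dict: act name -> (payoff if H, payoff if not-H)
    credal_set -- iterable of values for Cr(H)
    """
    def best_under(p):
        return max(acts, key=lambda a: p * acts[a][0] + (1 - p) * acts[a][1])
    chosen = {best_under(p) for p in credal_set}
    return chosen if len(chosen) == 1 else None  # None: the set disagrees

acts = {"bet_on_H": (1.0, -1.0), "abstain": (0.0, 0.0)}
print(unanimously_preferred(acts, [0.55, 0.60, 0.65]))  # {'bet_on_H'}
print(unanimously_preferred(acts, [0.40, 0.60]))        # None: no unanimous pick
```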
This approach is particularly attractive for modeling the distinction between risk (known probabilities) and ambiguity (unknown probabilities), as famously illustrated by the Ellsberg paradox. It also provides a more nuanced account of prior ignorance: rather than choosing a single "uninformative" prior, the agent can represent genuine ignorance by starting with a wide set of priors and allowing data to narrow the set over time.
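One way to sketch this narrowing, assuming a coin-bias model with a credal set of Beta priors (the particular parameters and the 60% heads rate are illustrative, not from the source):

```python
# Imprecise prior over a coin's bias: a set of Beta(a, b) priors.
priors = [(a, b) for a in (1, 2, 5) for b in (1, 2, 5)]

def posterior_mean_range(priors, heads, tails):
    """Span of posterior means across the credal set (Beta-Binomial)."""
    means = [(a + heads) / (a + b + heads + tails) for a, b in priors]
    return min(means), max(means)

for n in (0, 10, 100, 1000):
    heads = int(0.6 * n)  # suppose 60% of tosses land heads
    lo, hi = posterior_mean_range(priors, heads, n - heads)
    print(f"n={n:4d}: posterior means span [{lo:.3f}, {hi:.3f}]")
# The span shrinks as data accumulate: ignorance narrows toward the data.
```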
"Probability does not exist. The abandonment of superstitious beliefs about the existence of Phlogiston, the Cosmic Ether, Absolute Space and Time... or Fairies and Witches, was an essential step along the road to scientific thinking. Probability, too, if regarded as something endowed with some kind of objective existence, is no less a misleading misconception." — Bruno de Finetti, Theory of Probability (1974)
Credences in Practice
In applied Bayesian statistics, credences are operationalized as prior distributions. When a statistician specifies a prior P(θ), she is encoding her credences about the parameter θ before data collection. The posterior P(θ | D) then represents her updated credences after observing data D. Credible intervals summarize the posterior: a 95% credible interval is a region to which the statistician assigns credence 0.95.
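A short sketch of this workflow in a conjugate Beta-Binomial model (the Beta(2, 2) prior and the data, 37 successes in 50 trials, are hypothetical), using scipy:

```python
from scipy import stats

# Prior credences about a proportion theta: Beta(2, 2).
# Observed data D: 37 successes in 50 trials. Conjugacy gives the posterior:
posterior = stats.beta(2 + 37, 2 + 50 - 37)

# 95% equal-tailed credible interval: a region assigned credence 0.95.
lo, hi = posterior.ppf(0.025), posterior.ppf(0.975)
print(f"95% credible interval for theta: ({lo:.3f}, {hi:.3f})")
```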
Credences also play a central role in forecasting and prediction markets. Organizations like the Good Judgment Project have shown that trained forecasters who think explicitly in terms of calibrated credences — regularly updating numerical probabilities in response to new information — consistently outperform those who rely on qualitative assessments. This vindicates the Bayesian epistemologist's contention that thinking in terms of credences is not just philosophically elegant but practically superior.
Example: Forecasting an Election
An election forecaster is asked: "Will Candidate A win the governor's race?" Instead of saying "probably" or "it's a toss-up," she assigns a precise credence — a number between 0 and 1 representing her degree of belief.
Credences at Work
Three months before the election, based on polling, fundraising data, and historical trends, she assigns a credence of 0.62 that Candidate A will win.
This is not a claim about frequencies: this election happens exactly once. The 0.62 is her personal, evidence-based degree of confidence. Two weeks later, a major scandal breaks involving Candidate A, and she revises her credence downward.
Then new polling shows voters have largely forgiven the scandal, and she updates her credence back upward.
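A sketch of how such updates can be carried out with the odds form of Bayes' theorem; only the initial 0.62 comes from the example, and the likelihood ratios below are invented placeholders:

```python
def update(prior, lr):
    """Conditionalize via the odds form of Bayes' theorem.
    lr = P(evidence | A wins) / P(evidence | A loses)."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

cr = 0.62             # initial credence from the example
cr = update(cr, 0.4)  # scandal: evidence telling against A (hypothetical LR)
print(f"after scandal:   {cr:.2f}")
cr = update(cr, 2.0)  # forgiving polls: evidence favoring A (hypothetical LR)
print(f"after new polls: {cr:.2f}")
```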
Calibration — The Test of Good Credences
After years of forecasting, the forecaster checks her calibration: of all events she assigned ~60% credence, did roughly 60% actually happen? If yes, her credences are well-calibrated — they accurately map internal confidence to real-world outcomes.
"Probably" means different things to different people — studies show interpretations range from 55% to 90%. A credence of 0.62 is unambiguous. It can be tracked over time, tested for calibration, combined with decision theory (how much should you bet at these odds?), and compared across forecasters. This precision is why organizations like the Good Judgment Project insist on numerical credences — and why their "superforecasters" consistently outperform intelligence analysts who use vague verbal assessments.