
Bayes’ rule, one of the fundamental concepts of statistical signal processing, provides a way to update our belief about an event when new evidence arrives. Uncertainty is traditionally modeled by a probability distribution. Prior belief is thus expressed by a prior probability distribution, while the update involves the likelihood function, a probabilistic expression of how likely the evidence is to be observed. Many statisticians have argued, however, that probability theory needs to be broadened, because scarce training data may make it impossible to assign a precise probability to every event.
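As a point of reference for the precise-probability setting that the rest of the column relaxes, Bayes’ rule can be written in its standard form; the symbols below (x for the unknown quantity, y for the observed evidence) are notation chosen here for illustration rather than taken from the column:
\[
p(x \mid y) \;=\; \frac{p(y \mid x)\, p(x)}{p(y)} \;=\; \frac{p(y \mid x)\, p(x)}{\int p(y \mid x')\, p(x')\, \mathrm{d}x'},
\]
where p(x) is the prior distribution, p(y | x) is the likelihood function, and p(x | y) is the posterior distribution encoding the updated belief.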
Scope
Following the theoretical foundations of imprecise probability laid out by Walley [1], this “Lecture Notes” column presents a formulation and practical computation of Bayes’ rule for situations in which the probabilistic models (i.e., the prior distribution and the likelihood function) are imprecise.