Over the past two decades, a number of different approaches to “fuzzy probabilities” have been presented; the use of a common term masks fundamental differences between them. This paper surveys these theories, contrasting and relating them to one another. Problems with the existing approaches are noted, and a theory of “linguistic probabilities” is developed which seeks to retain the underlying insights of earlier work whilst remedying its technical defects. It is shown how the axiomatic theory of linguistic probabilities can be used to develop linguistic Bayesian networks, which have a wide range of practical applications. To illustrate this, a detailed and realistic example in the domain of forensic statistics is presented.