Bayesian probability is the standard framework for modelling uncertain knowledge and learning from observations. As a defining feature of optimal adversarial behaviour, Bayesian reasoning forms the basis of safety properties in contexts such as privacy and fairness. Probabilistic programming offers a convenient way to implement Bayesian reasoning, but the adversarial setting raises obstacles to its use: approximate inference can underestimate an adversary's knowledge, and exact inference is impractical when the state space is large. By abstracting distributions, the semantics of a probabilistic language, and the inference process, jointly termed probabilistic abstract interpretation, we construct adversary models that are both approximate and sound. We apply these techniques to build a privacy-protecting monitor and describe how to trade off precision against computational cost in its implementation while remaining sound with respect to privacy-risk bounds.
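The core idea can be illustrated with a minimal sketch (an assumed toy construction, not the paper's implementation): an exact Bayesian adversary update side by side with an interval abstraction that merges secrets into blocks and uses interval arithmetic to over-approximate the adversary's maximum posterior belief. All names (`abstract_vulnerability`, `like_bounds`, the block structure) are hypothetical.

```python
# Illustrative sketch: exact adversary inference vs. a sound interval
# abstraction of it. Exact rational arithmetic avoids rounding issues.
from fractions import Fraction as F

def exact_posterior(prior, likelihood, obs):
    """Exact adversary update: returns P(s | obs) for each secret s."""
    joint = {s: p * likelihood(s, obs) for s, p in prior.items()}
    z = sum(joint.values())
    return {s: w / z for s, w in joint.items()}

def abstract_vulnerability(blocks, like_bounds, obs):
    """Sound upper bound on max_s P(s | obs).

    blocks: {block_id: (mass_lo, mass_hi)} abstracting the prior;
    like_bounds(block_id, obs) -> (lo, hi) bounds the likelihood of obs
    for every secret in the block. Interval arithmetic guarantees the
    result over-approximates the exact adversary's maximum posterior.
    """
    # Numerator: a single secret in a block can carry at most the whole
    # block's upper mass, weighted by the block's upper likelihood.
    num_hi = max(m_hi * like_bounds(b, obs)[1]
                 for b, (m_lo, m_hi) in blocks.items())
    # Evidence: under-approximate the total probability of the observation,
    # so the quotient can only err upward (soundly).
    z_lo = sum(m_lo * like_bounds(b, obs)[0]
               for b, (m_lo, m_hi) in blocks.items())
    return num_hi / z_lo if z_lo > 0 else F(1)

# Four equally likely secrets; the adversary observes a noisy bit that
# reports whether the secret is "high" (s >= 2) with reliability 4/5.
prior = {s: F(1, 4) for s in range(4)}
def likelihood(s, o):
    return F(4, 5) if o == (s >= 2) else F(1, 5)

post = exact_posterior(prior, likelihood, True)
exact_vuln = max(post.values())          # exact max posterior: 2/5

# Abstraction: merge the four secrets into two blocks with exact masses;
# likelihood bounds are tight because each block is homogeneous here.
blocks = {"low": (F(1, 2), F(1, 2)), "high": (F(1, 2), F(1, 2))}
def like_bounds(b, o):
    p = F(4, 5) if o == (b == "high") else F(1, 5)
    return (p, p)

bound = abstract_vulnerability(blocks, like_bounds, True)  # 4/5
assert exact_vuln <= bound  # sound: the bound never understates the risk
```

Coarsening the blocks trades precision for cost: here the two-block abstraction yields a bound of 4/5 against an exact vulnerability of 2/5, looser but still a safe over-estimate of what the adversary can learn.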